Test Report: KVM_Linux_crio 20385

693540c0733dd51efa818bcfa77a0c31e0bd95f4:2025-02-10:38290

Test fail (10/327)

TestAddons/parallel/Ingress (154.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-176336 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-176336 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-176336 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [58e0a91e-67c4-4ecb-b465-8d8abb30521e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [58e0a91e-67c4-4ecb-b465-8d8abb30521e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003654967s
I0210 10:36:42.974956  116470 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-176336 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.901776591s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-176336 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.19
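The failing step is the in-VM curl against the ingress controller: the ssh'd curl exited with status 28, which is curl's code for an operation that timed out, so the request to 127.0.0.1 with the nginx.example.com Host header never received a response within the test's window. A minimal manual re-check, assuming the addons-176336 profile is still running (the --max-time flag is added here only to bound the wait and is not part of the test), would look roughly like:

    # confirm the ingress-nginx controller pod is Ready (same selector the test waits on)
    kubectl --context addons-176336 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
    # repeat the failing in-VM request with a bounded timeout and verbose output
    out/minikube-linux-amd64 -p addons-176336 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

If the second command also times out, the ingress controller pod and its service endpoints in the ingress-nginx namespace are the first things to inspect in the post-mortem logs below.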
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-176336 -n addons-176336
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-176336 logs -n 25: (1.245600206s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-183974                                                                     | download-only-183974 | jenkins | v1.35.0 | 10 Feb 25 10:33 UTC | 10 Feb 25 10:33 UTC |
	| delete  | -p download-only-052291                                                                     | download-only-052291 | jenkins | v1.35.0 | 10 Feb 25 10:33 UTC | 10 Feb 25 10:33 UTC |
	| delete  | -p download-only-183974                                                                     | download-only-183974 | jenkins | v1.35.0 | 10 Feb 25 10:33 UTC | 10 Feb 25 10:33 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-335395 | jenkins | v1.35.0 | 10 Feb 25 10:33 UTC |                     |
	|         | binary-mirror-335395                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45531                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-335395                                                                     | binary-mirror-335395 | jenkins | v1.35.0 | 10 Feb 25 10:33 UTC | 10 Feb 25 10:33 UTC |
	| addons  | enable dashboard -p                                                                         | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:33 UTC |                     |
	|         | addons-176336                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:33 UTC |                     |
	|         | addons-176336                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-176336 --wait=true                                                                | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:33 UTC | 10 Feb 25 10:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-176336 addons disable                                                                | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:35 UTC | 10 Feb 25 10:35 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-176336 addons disable                                                                | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:36 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:36 UTC |
	|         | -p addons-176336                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-176336 addons                                                                        | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:36 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-176336 addons                                                                        | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:36 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-176336 addons                                                                        | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:36 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-176336 addons disable                                                                | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:36 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-176336 ip                                                                            | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:36 UTC |
	| addons  | addons-176336 addons disable                                                                | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:36 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-176336 ssh cat                                                                       | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:36 UTC |
	|         | /opt/local-path-provisioner/pvc-900f7ab4-d741-40ec-972f-db46b21c9e8e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-176336 addons disable                                                                | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:37 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-176336 addons                                                                        | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:36 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-176336 ssh curl -s                                                                   | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-176336 addons disable                                                                | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:36 UTC | 10 Feb 25 10:36 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-176336 addons                                                                        | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:37 UTC | 10 Feb 25 10:37 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-176336 addons                                                                        | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:37 UTC | 10 Feb 25 10:37 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-176336 ip                                                                            | addons-176336        | jenkins | v1.35.0 | 10 Feb 25 10:38 UTC | 10 Feb 25 10:38 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 10:33:33
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 10:33:33.828692  117164 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:33:33.828826  117164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:33:33.828837  117164 out.go:358] Setting ErrFile to fd 2...
	I0210 10:33:33.828841  117164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:33:33.829053  117164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 10:33:33.829753  117164 out.go:352] Setting JSON to false
	I0210 10:33:33.830648  117164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4556,"bootTime":1739179058,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 10:33:33.830762  117164 start.go:139] virtualization: kvm guest
	I0210 10:33:33.832618  117164 out.go:177] * [addons-176336] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 10:33:33.833882  117164 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 10:33:33.833883  117164 notify.go:220] Checking for updates...
	I0210 10:33:33.834962  117164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:33:33.836040  117164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 10:33:33.837081  117164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 10:33:33.838051  117164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 10:33:33.838994  117164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 10:33:33.840342  117164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:33:33.872054  117164 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 10:33:33.873039  117164 start.go:297] selected driver: kvm2
	I0210 10:33:33.873054  117164 start.go:901] validating driver "kvm2" against <nil>
	I0210 10:33:33.873067  117164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 10:33:33.873801  117164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 10:33:33.873883  117164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-109271/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 10:33:33.888421  117164 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 10:33:33.888480  117164 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 10:33:33.888800  117164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 10:33:33.888838  117164 cni.go:84] Creating CNI manager for ""
	I0210 10:33:33.888892  117164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 10:33:33.888901  117164 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 10:33:33.888965  117164 start.go:340] cluster config:
	{Name:addons-176336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-176336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPause
Interval:1m0s}
	I0210 10:33:33.889076  117164 iso.go:125] acquiring lock: {Name:mk479d49a84808a4b16be867aad83d1d3d802291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 10:33:33.890474  117164 out.go:177] * Starting "addons-176336" primary control-plane node in "addons-176336" cluster
	I0210 10:33:33.891475  117164 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 10:33:33.891511  117164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 10:33:33.891525  117164 cache.go:56] Caching tarball of preloaded images
	I0210 10:33:33.891603  117164 preload.go:172] Found /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 10:33:33.891617  117164 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 10:33:33.891973  117164 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/config.json ...
	I0210 10:33:33.891997  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/config.json: {Name:mk9d7f61bb53b3d4be04229fb2a8898456803d21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:33:33.892169  117164 start.go:360] acquireMachinesLock for addons-176336: {Name:mke6c3a615c5915495f0682c0833d8830c2c1004 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 10:33:33.892231  117164 start.go:364] duration metric: took 44.701µs to acquireMachinesLock for "addons-176336"
	I0210 10:33:33.892253  117164 start.go:93] Provisioning new machine with config: &{Name:addons-176336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-176336 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 10:33:33.892317  117164 start.go:125] createHost starting for "" (driver="kvm2")
	I0210 10:33:33.894538  117164 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0210 10:33:33.894813  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:33:33.894868  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:33:33.909672  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41449
	I0210 10:33:33.910080  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:33:33.910638  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:33:33.910667  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:33:33.910990  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:33:33.911233  117164 main.go:141] libmachine: (addons-176336) Calling .GetMachineName
	I0210 10:33:33.911364  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:33:33.911518  117164 start.go:159] libmachine.API.Create for "addons-176336" (driver="kvm2")
	I0210 10:33:33.911546  117164 client.go:168] LocalClient.Create starting
	I0210 10:33:33.911580  117164 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem
	I0210 10:33:34.089716  117164 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem
	I0210 10:33:34.300346  117164 main.go:141] libmachine: Running pre-create checks...
	I0210 10:33:34.300373  117164 main.go:141] libmachine: (addons-176336) Calling .PreCreateCheck
	I0210 10:33:34.300835  117164 main.go:141] libmachine: (addons-176336) Calling .GetConfigRaw
	I0210 10:33:34.301280  117164 main.go:141] libmachine: Creating machine...
	I0210 10:33:34.301295  117164 main.go:141] libmachine: (addons-176336) Calling .Create
	I0210 10:33:34.301417  117164 main.go:141] libmachine: (addons-176336) creating KVM machine...
	I0210 10:33:34.301428  117164 main.go:141] libmachine: (addons-176336) creating network...
	I0210 10:33:34.302723  117164 main.go:141] libmachine: (addons-176336) DBG | found existing default KVM network
	I0210 10:33:34.303527  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:34.303384  117187 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015cc0}
	I0210 10:33:34.303548  117164 main.go:141] libmachine: (addons-176336) DBG | created network xml: 
	I0210 10:33:34.303556  117164 main.go:141] libmachine: (addons-176336) DBG | <network>
	I0210 10:33:34.303561  117164 main.go:141] libmachine: (addons-176336) DBG |   <name>mk-addons-176336</name>
	I0210 10:33:34.303567  117164 main.go:141] libmachine: (addons-176336) DBG |   <dns enable='no'/>
	I0210 10:33:34.303571  117164 main.go:141] libmachine: (addons-176336) DBG |   
	I0210 10:33:34.303579  117164 main.go:141] libmachine: (addons-176336) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0210 10:33:34.303589  117164 main.go:141] libmachine: (addons-176336) DBG |     <dhcp>
	I0210 10:33:34.303602  117164 main.go:141] libmachine: (addons-176336) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0210 10:33:34.303608  117164 main.go:141] libmachine: (addons-176336) DBG |     </dhcp>
	I0210 10:33:34.303629  117164 main.go:141] libmachine: (addons-176336) DBG |   </ip>
	I0210 10:33:34.303646  117164 main.go:141] libmachine: (addons-176336) DBG |   
	I0210 10:33:34.303651  117164 main.go:141] libmachine: (addons-176336) DBG | </network>
	I0210 10:33:34.303656  117164 main.go:141] libmachine: (addons-176336) DBG | 
	I0210 10:33:34.308776  117164 main.go:141] libmachine: (addons-176336) DBG | trying to create private KVM network mk-addons-176336 192.168.39.0/24...
	I0210 10:33:34.376848  117164 main.go:141] libmachine: (addons-176336) DBG | private KVM network mk-addons-176336 192.168.39.0/24 created
	I0210 10:33:34.376891  117164 main.go:141] libmachine: (addons-176336) setting up store path in /home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336 ...
	I0210 10:33:34.376913  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:34.376820  117187 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 10:33:34.376932  117164 main.go:141] libmachine: (addons-176336) building disk image from file:///home/jenkins/minikube-integration/20385-109271/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 10:33:34.376995  117164 main.go:141] libmachine: (addons-176336) Downloading /home/jenkins/minikube-integration/20385-109271/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20385-109271/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 10:33:34.672687  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:34.672555  117187 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa...
	I0210 10:33:34.747603  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:34.747469  117187 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/addons-176336.rawdisk...
	I0210 10:33:34.747632  117164 main.go:141] libmachine: (addons-176336) DBG | Writing magic tar header
	I0210 10:33:34.747643  117164 main.go:141] libmachine: (addons-176336) DBG | Writing SSH key tar header
	I0210 10:33:34.747650  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:34.747597  117187 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336 ...
	I0210 10:33:34.747713  117164 main.go:141] libmachine: (addons-176336) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336
	I0210 10:33:34.747755  117164 main.go:141] libmachine: (addons-176336) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336 (perms=drwx------)
	I0210 10:33:34.747772  117164 main.go:141] libmachine: (addons-176336) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube/machines (perms=drwxr-xr-x)
	I0210 10:33:34.747783  117164 main.go:141] libmachine: (addons-176336) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube/machines
	I0210 10:33:34.747798  117164 main.go:141] libmachine: (addons-176336) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 10:33:34.747816  117164 main.go:141] libmachine: (addons-176336) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271
	I0210 10:33:34.747827  117164 main.go:141] libmachine: (addons-176336) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube (perms=drwxr-xr-x)
	I0210 10:33:34.747837  117164 main.go:141] libmachine: (addons-176336) setting executable bit set on /home/jenkins/minikube-integration/20385-109271 (perms=drwxrwxr-x)
	I0210 10:33:34.747842  117164 main.go:141] libmachine: (addons-176336) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0210 10:33:34.747852  117164 main.go:141] libmachine: (addons-176336) DBG | checking permissions on dir: /home/jenkins
	I0210 10:33:34.747858  117164 main.go:141] libmachine: (addons-176336) DBG | checking permissions on dir: /home
	I0210 10:33:34.747868  117164 main.go:141] libmachine: (addons-176336) DBG | skipping /home - not owner
	I0210 10:33:34.747879  117164 main.go:141] libmachine: (addons-176336) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0210 10:33:34.747897  117164 main.go:141] libmachine: (addons-176336) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0210 10:33:34.747907  117164 main.go:141] libmachine: (addons-176336) creating domain...
	I0210 10:33:34.749034  117164 main.go:141] libmachine: (addons-176336) define libvirt domain using xml: 
	I0210 10:33:34.749059  117164 main.go:141] libmachine: (addons-176336) <domain type='kvm'>
	I0210 10:33:34.749068  117164 main.go:141] libmachine: (addons-176336)   <name>addons-176336</name>
	I0210 10:33:34.749082  117164 main.go:141] libmachine: (addons-176336)   <memory unit='MiB'>4000</memory>
	I0210 10:33:34.749090  117164 main.go:141] libmachine: (addons-176336)   <vcpu>2</vcpu>
	I0210 10:33:34.749095  117164 main.go:141] libmachine: (addons-176336)   <features>
	I0210 10:33:34.749100  117164 main.go:141] libmachine: (addons-176336)     <acpi/>
	I0210 10:33:34.749104  117164 main.go:141] libmachine: (addons-176336)     <apic/>
	I0210 10:33:34.749110  117164 main.go:141] libmachine: (addons-176336)     <pae/>
	I0210 10:33:34.749114  117164 main.go:141] libmachine: (addons-176336)     
	I0210 10:33:34.749119  117164 main.go:141] libmachine: (addons-176336)   </features>
	I0210 10:33:34.749127  117164 main.go:141] libmachine: (addons-176336)   <cpu mode='host-passthrough'>
	I0210 10:33:34.749133  117164 main.go:141] libmachine: (addons-176336)   
	I0210 10:33:34.749141  117164 main.go:141] libmachine: (addons-176336)   </cpu>
	I0210 10:33:34.749146  117164 main.go:141] libmachine: (addons-176336)   <os>
	I0210 10:33:34.749153  117164 main.go:141] libmachine: (addons-176336)     <type>hvm</type>
	I0210 10:33:34.749175  117164 main.go:141] libmachine: (addons-176336)     <boot dev='cdrom'/>
	I0210 10:33:34.749193  117164 main.go:141] libmachine: (addons-176336)     <boot dev='hd'/>
	I0210 10:33:34.749207  117164 main.go:141] libmachine: (addons-176336)     <bootmenu enable='no'/>
	I0210 10:33:34.749217  117164 main.go:141] libmachine: (addons-176336)   </os>
	I0210 10:33:34.749228  117164 main.go:141] libmachine: (addons-176336)   <devices>
	I0210 10:33:34.749239  117164 main.go:141] libmachine: (addons-176336)     <disk type='file' device='cdrom'>
	I0210 10:33:34.749250  117164 main.go:141] libmachine: (addons-176336)       <source file='/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/boot2docker.iso'/>
	I0210 10:33:34.749258  117164 main.go:141] libmachine: (addons-176336)       <target dev='hdc' bus='scsi'/>
	I0210 10:33:34.749263  117164 main.go:141] libmachine: (addons-176336)       <readonly/>
	I0210 10:33:34.749269  117164 main.go:141] libmachine: (addons-176336)     </disk>
	I0210 10:33:34.749278  117164 main.go:141] libmachine: (addons-176336)     <disk type='file' device='disk'>
	I0210 10:33:34.749290  117164 main.go:141] libmachine: (addons-176336)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0210 10:33:34.749306  117164 main.go:141] libmachine: (addons-176336)       <source file='/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/addons-176336.rawdisk'/>
	I0210 10:33:34.749321  117164 main.go:141] libmachine: (addons-176336)       <target dev='hda' bus='virtio'/>
	I0210 10:33:34.749331  117164 main.go:141] libmachine: (addons-176336)     </disk>
	I0210 10:33:34.749342  117164 main.go:141] libmachine: (addons-176336)     <interface type='network'>
	I0210 10:33:34.749349  117164 main.go:141] libmachine: (addons-176336)       <source network='mk-addons-176336'/>
	I0210 10:33:34.749354  117164 main.go:141] libmachine: (addons-176336)       <model type='virtio'/>
	I0210 10:33:34.749361  117164 main.go:141] libmachine: (addons-176336)     </interface>
	I0210 10:33:34.749372  117164 main.go:141] libmachine: (addons-176336)     <interface type='network'>
	I0210 10:33:34.749382  117164 main.go:141] libmachine: (addons-176336)       <source network='default'/>
	I0210 10:33:34.749392  117164 main.go:141] libmachine: (addons-176336)       <model type='virtio'/>
	I0210 10:33:34.749400  117164 main.go:141] libmachine: (addons-176336)     </interface>
	I0210 10:33:34.749413  117164 main.go:141] libmachine: (addons-176336)     <serial type='pty'>
	I0210 10:33:34.749430  117164 main.go:141] libmachine: (addons-176336)       <target port='0'/>
	I0210 10:33:34.749438  117164 main.go:141] libmachine: (addons-176336)     </serial>
	I0210 10:33:34.749453  117164 main.go:141] libmachine: (addons-176336)     <console type='pty'>
	I0210 10:33:34.749465  117164 main.go:141] libmachine: (addons-176336)       <target type='serial' port='0'/>
	I0210 10:33:34.749473  117164 main.go:141] libmachine: (addons-176336)     </console>
	I0210 10:33:34.749487  117164 main.go:141] libmachine: (addons-176336)     <rng model='virtio'>
	I0210 10:33:34.749500  117164 main.go:141] libmachine: (addons-176336)       <backend model='random'>/dev/random</backend>
	I0210 10:33:34.749513  117164 main.go:141] libmachine: (addons-176336)     </rng>
	I0210 10:33:34.749520  117164 main.go:141] libmachine: (addons-176336)     
	I0210 10:33:34.749528  117164 main.go:141] libmachine: (addons-176336)     
	I0210 10:33:34.749535  117164 main.go:141] libmachine: (addons-176336)   </devices>
	I0210 10:33:34.749547  117164 main.go:141] libmachine: (addons-176336) </domain>
	I0210 10:33:34.749585  117164 main.go:141] libmachine: (addons-176336) 
	I0210 10:33:34.754855  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:cc:62:8c in network default
	I0210 10:33:34.755459  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:34.755485  117164 main.go:141] libmachine: (addons-176336) starting domain...
	I0210 10:33:34.755493  117164 main.go:141] libmachine: (addons-176336) ensuring networks are active...
	I0210 10:33:34.756123  117164 main.go:141] libmachine: (addons-176336) Ensuring network default is active
	I0210 10:33:34.756480  117164 main.go:141] libmachine: (addons-176336) Ensuring network mk-addons-176336 is active
	I0210 10:33:34.757868  117164 main.go:141] libmachine: (addons-176336) getting domain XML...
	I0210 10:33:34.758534  117164 main.go:141] libmachine: (addons-176336) creating domain...
	I0210 10:33:36.124445  117164 main.go:141] libmachine: (addons-176336) waiting for IP...
	I0210 10:33:36.125126  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:36.125466  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:36.125527  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:36.125466  117187 retry.go:31] will retry after 305.025369ms: waiting for domain to come up
	I0210 10:33:36.431977  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:36.432370  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:36.432403  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:36.432360  117187 retry.go:31] will retry after 376.376552ms: waiting for domain to come up
	I0210 10:33:36.810052  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:36.810465  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:36.810492  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:36.810450  117187 retry.go:31] will retry after 443.241561ms: waiting for domain to come up
	I0210 10:33:37.255060  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:37.255512  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:37.255545  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:37.255468  117187 retry.go:31] will retry after 566.320266ms: waiting for domain to come up
	I0210 10:33:37.823212  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:37.823639  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:37.823668  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:37.823616  117187 retry.go:31] will retry after 568.137457ms: waiting for domain to come up
	I0210 10:33:38.393378  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:38.393834  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:38.393861  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:38.393790  117187 retry.go:31] will retry after 858.35974ms: waiting for domain to come up
	I0210 10:33:39.253321  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:39.253741  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:39.253808  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:39.253732  117187 retry.go:31] will retry after 1.16527126s: waiting for domain to come up
	I0210 10:33:40.420243  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:40.420745  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:40.420774  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:40.420705  117187 retry.go:31] will retry after 1.177655604s: waiting for domain to come up
	I0210 10:33:41.599957  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:41.600523  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:41.600543  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:41.600452  117187 retry.go:31] will retry after 1.470847758s: waiting for domain to come up
	I0210 10:33:43.073048  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:43.073464  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:43.073487  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:43.073434  117187 retry.go:31] will retry after 1.528853884s: waiting for domain to come up
	I0210 10:33:44.603815  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:44.604282  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:44.604347  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:44.604206  117187 retry.go:31] will retry after 2.052310221s: waiting for domain to come up
	I0210 10:33:46.659433  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:46.659939  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:46.659974  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:46.659921  117187 retry.go:31] will retry after 3.589500767s: waiting for domain to come up
	I0210 10:33:50.250998  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:50.251454  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:50.251486  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:50.251409  117187 retry.go:31] will retry after 3.390520045s: waiting for domain to come up
	I0210 10:33:53.644334  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:53.644764  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find current IP address of domain addons-176336 in network mk-addons-176336
	I0210 10:33:53.644800  117164 main.go:141] libmachine: (addons-176336) DBG | I0210 10:33:53.644698  117187 retry.go:31] will retry after 5.476994096s: waiting for domain to come up
	I0210 10:33:59.123568  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.124038  117164 main.go:141] libmachine: (addons-176336) found domain IP: 192.168.39.19
	I0210 10:33:59.124085  117164 main.go:141] libmachine: (addons-176336) reserving static IP address...
	I0210 10:33:59.124105  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has current primary IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.124508  117164 main.go:141] libmachine: (addons-176336) DBG | unable to find host DHCP lease matching {name: "addons-176336", mac: "52:54:00:52:46:17", ip: "192.168.39.19"} in network mk-addons-176336
	I0210 10:33:59.194218  117164 main.go:141] libmachine: (addons-176336) DBG | Getting to WaitForSSH function...
	I0210 10:33:59.194245  117164 main.go:141] libmachine: (addons-176336) reserved static IP address 192.168.39.19 for domain addons-176336
	I0210 10:33:59.194262  117164 main.go:141] libmachine: (addons-176336) waiting for SSH...
	I0210 10:33:59.196494  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.196859  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:46:17}
	I0210 10:33:59.196882  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.197095  117164 main.go:141] libmachine: (addons-176336) DBG | Using SSH client type: external
	I0210 10:33:59.197125  117164 main.go:141] libmachine: (addons-176336) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa (-rw-------)
	I0210 10:33:59.197145  117164 main.go:141] libmachine: (addons-176336) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 10:33:59.197156  117164 main.go:141] libmachine: (addons-176336) DBG | About to run SSH command:
	I0210 10:33:59.197164  117164 main.go:141] libmachine: (addons-176336) DBG | exit 0
	I0210 10:33:59.322833  117164 main.go:141] libmachine: (addons-176336) DBG | SSH cmd err, output: <nil>: 
	I0210 10:33:59.323095  117164 main.go:141] libmachine: (addons-176336) KVM machine creation complete
	I0210 10:33:59.323460  117164 main.go:141] libmachine: (addons-176336) Calling .GetConfigRaw
	I0210 10:33:59.323996  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:33:59.324182  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:33:59.324306  117164 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 10:33:59.324318  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:33:59.325565  117164 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 10:33:59.325578  117164 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 10:33:59.325586  117164 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 10:33:59.325592  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:33:59.328023  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.328376  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:33:59.328410  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.328562  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:33:59.328738  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:33:59.328905  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:33:59.329006  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:33:59.329155  117164 main.go:141] libmachine: Using SSH client type: native
	I0210 10:33:59.329368  117164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0210 10:33:59.329385  117164 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 10:33:59.430532  117164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 10:33:59.430560  117164 main.go:141] libmachine: Detecting the provisioner...
	I0210 10:33:59.430568  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:33:59.433538  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.433901  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:33:59.433937  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.434136  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:33:59.434326  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:33:59.434510  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:33:59.434681  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:33:59.434824  117164 main.go:141] libmachine: Using SSH client type: native
	I0210 10:33:59.434988  117164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0210 10:33:59.434998  117164 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 10:33:59.539568  117164 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 10:33:59.539655  117164 main.go:141] libmachine: found compatible host: buildroot
	I0210 10:33:59.539662  117164 main.go:141] libmachine: Provisioning with buildroot...
	I0210 10:33:59.539671  117164 main.go:141] libmachine: (addons-176336) Calling .GetMachineName
	I0210 10:33:59.539929  117164 buildroot.go:166] provisioning hostname "addons-176336"
	I0210 10:33:59.539961  117164 main.go:141] libmachine: (addons-176336) Calling .GetMachineName
	I0210 10:33:59.540180  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:33:59.542946  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.543334  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:33:59.543364  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.543505  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:33:59.543683  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:33:59.543845  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:33:59.543947  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:33:59.544074  117164 main.go:141] libmachine: Using SSH client type: native
	I0210 10:33:59.544254  117164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0210 10:33:59.544267  117164 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-176336 && echo "addons-176336" | sudo tee /etc/hostname
	I0210 10:33:59.656127  117164 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-176336
	
	I0210 10:33:59.656155  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:33:59.660079  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.660515  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:33:59.660546  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.660731  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:33:59.660926  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:33:59.661055  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:33:59.661235  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:33:59.661388  117164 main.go:141] libmachine: Using SSH client type: native
	I0210 10:33:59.661581  117164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0210 10:33:59.661604  117164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-176336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-176336/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-176336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 10:33:59.767272  117164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 10:33:59.767311  117164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20385-109271/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-109271/.minikube}
	I0210 10:33:59.767338  117164 buildroot.go:174] setting up certificates
	I0210 10:33:59.767352  117164 provision.go:84] configureAuth start
	I0210 10:33:59.767366  117164 main.go:141] libmachine: (addons-176336) Calling .GetMachineName
	I0210 10:33:59.767679  117164 main.go:141] libmachine: (addons-176336) Calling .GetIP
	I0210 10:33:59.770466  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.770882  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:33:59.770910  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.771062  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:33:59.773612  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.773946  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:33:59.773971  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:33:59.774098  117164 provision.go:143] copyHostCerts
	I0210 10:33:59.774188  117164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem (1078 bytes)
	I0210 10:33:59.774318  117164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem (1123 bytes)
	I0210 10:33:59.774410  117164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem (1679 bytes)
	I0210 10:33:59.774483  117164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem org=jenkins.addons-176336 san=[127.0.0.1 192.168.39.19 addons-176336 localhost minikube]
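	The server certificate generated here carries the SANs listed above (127.0.0.1, 192.168.39.19, addons-176336, localhost, minikube). If a TLS failure later needed to be traced back to this step, inspecting the written cert on the host is straightforward (a sketch only; the path matches this log):

	$ openssl x509 -noout -text -in /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem | grep -A1 "Subject Alternative Name"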
	I0210 10:34:00.065432  117164 provision.go:177] copyRemoteCerts
	I0210 10:34:00.065500  117164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 10:34:00.065527  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:00.068658  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.068945  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:00.068989  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.069168  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:00.069355  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:00.069577  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:00.069721  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:00.148566  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 10:34:00.170395  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0210 10:34:00.194008  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 10:34:00.217823  117164 provision.go:87] duration metric: took 450.456641ms to configureAuth
	I0210 10:34:00.217850  117164 buildroot.go:189] setting minikube options for container-runtime
	I0210 10:34:00.218053  117164 config.go:182] Loaded profile config "addons-176336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 10:34:00.218168  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:00.220932  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.221220  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:00.221252  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.221431  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:00.221607  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:00.221780  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:00.221938  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:00.222187  117164 main.go:141] libmachine: Using SSH client type: native
	I0210 10:34:00.222343  117164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0210 10:34:00.222358  117164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 10:34:00.435847  117164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 10:34:00.435877  117164 main.go:141] libmachine: Checking connection to Docker...
	I0210 10:34:00.435885  117164 main.go:141] libmachine: (addons-176336) Calling .GetURL
	I0210 10:34:00.437327  117164 main.go:141] libmachine: (addons-176336) DBG | using libvirt version 6000000
	I0210 10:34:00.439426  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.439699  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:00.439730  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.439903  117164 main.go:141] libmachine: Docker is up and running!
	I0210 10:34:00.439917  117164 main.go:141] libmachine: Reticulating splines...
	I0210 10:34:00.439927  117164 client.go:171] duration metric: took 26.528370651s to LocalClient.Create
	I0210 10:34:00.439955  117164 start.go:167] duration metric: took 26.528436729s to libmachine.API.Create "addons-176336"
	I0210 10:34:00.439980  117164 start.go:293] postStartSetup for "addons-176336" (driver="kvm2")
	I0210 10:34:00.439996  117164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 10:34:00.440019  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:00.440262  117164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 10:34:00.440297  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:00.442234  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.442563  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:00.442590  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.442726  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:00.442902  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:00.443039  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:00.443207  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:00.521221  117164 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 10:34:00.525194  117164 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 10:34:00.525220  117164 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/addons for local assets ...
	I0210 10:34:00.525312  117164 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/files for local assets ...
	I0210 10:34:00.525346  117164 start.go:296] duration metric: took 85.354828ms for postStartSetup
	I0210 10:34:00.525451  117164 main.go:141] libmachine: (addons-176336) Calling .GetConfigRaw
	I0210 10:34:00.526089  117164 main.go:141] libmachine: (addons-176336) Calling .GetIP
	I0210 10:34:00.528819  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.529153  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:00.529183  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.529363  117164 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/config.json ...
	I0210 10:34:00.529578  117164 start.go:128] duration metric: took 26.63724824s to createHost
	I0210 10:34:00.529617  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:00.531688  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.531922  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:00.531955  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.532065  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:00.532250  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:00.532385  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:00.532577  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:00.532751  117164 main.go:141] libmachine: Using SSH client type: native
	I0210 10:34:00.532903  117164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0210 10:34:00.532913  117164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 10:34:00.631714  117164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739183640.609950895
	
	I0210 10:34:00.631750  117164 fix.go:216] guest clock: 1739183640.609950895
	I0210 10:34:00.631762  117164 fix.go:229] Guest: 2025-02-10 10:34:00.609950895 +0000 UTC Remote: 2025-02-10 10:34:00.529602896 +0000 UTC m=+26.737940026 (delta=80.347999ms)
	I0210 10:34:00.631798  117164 fix.go:200] guest clock delta is within tolerance: 80.347999ms
	I0210 10:34:00.631806  117164 start.go:83] releasing machines lock for "addons-176336", held for 26.739561679s
	I0210 10:34:00.631855  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:00.632119  117164 main.go:141] libmachine: (addons-176336) Calling .GetIP
	I0210 10:34:00.634543  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.634876  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:00.634916  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.635032  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:00.635549  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:00.635706  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:00.635812  117164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 10:34:00.635857  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:00.635904  117164 ssh_runner.go:195] Run: cat /version.json
	I0210 10:34:00.635931  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:00.638255  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.638588  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.638616  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:00.638638  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.638756  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:00.638917  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:00.638962  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:00.638991  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:00.639040  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:00.639165  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:00.639250  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:00.639291  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:00.639421  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:00.639545  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:00.747375  117164 ssh_runner.go:195] Run: systemctl --version
	I0210 10:34:00.753441  117164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 10:34:00.909954  117164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 10:34:00.915518  117164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 10:34:00.915592  117164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 10:34:00.929988  117164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 10:34:00.930011  117164 start.go:495] detecting cgroup driver to use...
	I0210 10:34:00.930064  117164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 10:34:00.944793  117164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 10:34:00.957969  117164 docker.go:217] disabling cri-docker service (if available) ...
	I0210 10:34:00.958029  117164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 10:34:00.970401  117164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 10:34:00.982802  117164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 10:34:01.095985  117164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 10:34:01.258859  117164 docker.go:233] disabling docker service ...
	I0210 10:34:01.258941  117164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 10:34:01.272469  117164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 10:34:01.284996  117164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 10:34:01.403287  117164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 10:34:01.524845  117164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 10:34:01.538550  117164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 10:34:01.555299  117164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 10:34:01.555376  117164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 10:34:01.564445  117164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 10:34:01.564513  117164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 10:34:01.573658  117164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 10:34:01.582486  117164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 10:34:01.591242  117164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 10:34:01.600544  117164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 10:34:01.609583  117164 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 10:34:01.624687  117164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
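	The sed/grep sequence above edits CRI-O's drop-in configuration in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and opens unprivileged ports via default_sysctls. One way to confirm the result on the guest is to dump the file; the keys below are only the ones these commands touch, their order may differ, and the rest of the drop-in (elided) comes from the stock ISO image, which this log does not print:

	$ out/minikube-linux-amd64 -p addons-176336 ssh "sudo cat /etc/crio/crio.conf.d/02-crio.conf"
	...
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	...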
	I0210 10:34:01.633898  117164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 10:34:01.642134  117164 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 10:34:01.642182  117164 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 10:34:01.653670  117164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
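	With br_netfilter loaded and ip_forward set to 1, the sysctl probe that failed a few lines earlier would now succeed. If the bridge/netfilter state ever needs to be rechecked by hand, the same ssh entry point the harness uses works (illustrative):

	$ out/minikube-linux-amd64 -p addons-176336 ssh "sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"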
	I0210 10:34:01.662222  117164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:34:01.776386  117164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 10:34:01.856203  117164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 10:34:01.856298  117164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 10:34:01.861117  117164 start.go:563] Will wait 60s for crictl version
	I0210 10:34:01.861198  117164 ssh_runner.go:195] Run: which crictl
	I0210 10:34:01.864678  117164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 10:34:01.904521  117164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 10:34:01.904633  117164 ssh_runner.go:195] Run: crio --version
	I0210 10:34:01.930915  117164 ssh_runner.go:195] Run: crio --version
	I0210 10:34:01.958428  117164 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 10:34:01.959554  117164 main.go:141] libmachine: (addons-176336) Calling .GetIP
	I0210 10:34:01.962091  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:01.962450  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:01.962481  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:01.962668  117164 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 10:34:01.966493  117164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 10:34:01.979794  117164 kubeadm.go:883] updating cluster {Name:addons-176336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-176336 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 10:34:01.979896  117164 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 10:34:01.979935  117164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 10:34:02.015531  117164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 10:34:02.015597  117164 ssh_runner.go:195] Run: which lz4
	I0210 10:34:02.019524  117164 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 10:34:02.023473  117164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 10:34:02.023506  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 10:34:03.123342  117164 crio.go:462] duration metric: took 1.103837795s to copy over tarball
	I0210 10:34:03.123426  117164 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 10:34:05.223026  117164 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.099561151s)
	I0210 10:34:05.223070  117164 crio.go:469] duration metric: took 2.099694716s to extract the tarball
	I0210 10:34:05.223082  117164 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 10:34:05.259366  117164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 10:34:05.298192  117164 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 10:34:05.298228  117164 cache_images.go:84] Images are preloaded, skipping loading
	I0210 10:34:05.298239  117164 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.32.1 crio true true} ...
	I0210 10:34:05.298346  117164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-176336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-176336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
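	The unit override above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (the 312-byte scp at 10:34:05.358228). To see the merged unit the node actually runs, something along these lines works (a sketch, not part of the test flow):

	$ out/minikube-linux-amd64 -p addons-176336 ssh "systemctl cat kubelet"
	$ out/minikube-linux-amd64 -p addons-176336 ssh "systemctl show kubelet -p ExecStart"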
	I0210 10:34:05.298519  117164 ssh_runner.go:195] Run: crio config
	I0210 10:34:05.339436  117164 cni.go:84] Creating CNI manager for ""
	I0210 10:34:05.339468  117164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 10:34:05.339481  117164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 10:34:05.339512  117164 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-176336 NodeName:addons-176336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 10:34:05.339663  117164 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-176336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
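	The kubeadm manifest above is staged as /var/tmp/minikube/kubeadm.yaml.new, copied to /var/tmp/minikube/kubeadm.yaml, and then fed to kubeadm init (10:34:06.934828 below). If a config like this ever needs to be sanity-checked without touching the cluster, a dry run against the same pinned binary is one option (flags depend on the kubeadm version; this is a sketch, not what the test does):

	$ sudo /var/lib/minikube/binaries/v1.32.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run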
	
	I0210 10:34:05.339737  117164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 10:34:05.349658  117164 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 10:34:05.349730  117164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 10:34:05.358228  117164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0210 10:34:05.372854  117164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 10:34:05.387378  117164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0210 10:34:05.402506  117164 ssh_runner.go:195] Run: grep 192.168.39.19	control-plane.minikube.internal$ /etc/hosts
	I0210 10:34:05.405946  117164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 10:34:05.416521  117164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:34:05.532950  117164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 10:34:05.549052  117164 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336 for IP: 192.168.39.19
	I0210 10:34:05.549086  117164 certs.go:194] generating shared ca certs ...
	I0210 10:34:05.549111  117164 certs.go:226] acquiring lock for ca certs: {Name:mk41def3593b0ff6effd099cf80de2e0c576c931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:05.549293  117164 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key
	I0210 10:34:05.777187  117164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt ...
	I0210 10:34:05.777219  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt: {Name:mk3ef9004c790ad4ebc5c96aaec992b484fcb35e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:05.777386  117164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key ...
	I0210 10:34:05.777398  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key: {Name:mkc5362be0c714464adbf77992fd0e49e25467da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:05.777485  117164 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key
	I0210 10:34:05.962602  117164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt ...
	I0210 10:34:05.962633  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt: {Name:mka588653bd3758ed1d6cecfb0600397dac5a5b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:05.962814  117164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key ...
	I0210 10:34:05.962829  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key: {Name:mkd1eae93f3de4068555794b95fb288932ee0695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:05.962935  117164 certs.go:256] generating profile certs ...
	I0210 10:34:05.962997  117164 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.key
	I0210 10:34:05.963012  117164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt with IP's: []
	I0210 10:34:06.320934  117164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt ...
	I0210 10:34:06.320966  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: {Name:mkbe2debe53c0e0236a667345bdb8b1a78905d41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:06.321162  117164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.key ...
	I0210 10:34:06.321176  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.key: {Name:mk537074d97bbceb7286f9032799efa4ed894039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:06.321279  117164 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.key.a1e4266e
	I0210 10:34:06.321302  117164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.crt.a1e4266e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19]
	I0210 10:34:06.479151  117164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.crt.a1e4266e ...
	I0210 10:34:06.479202  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.crt.a1e4266e: {Name:mk547836c7a38695cb7c33612d90edce6f6ac49f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:06.479419  117164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.key.a1e4266e ...
	I0210 10:34:06.479440  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.key.a1e4266e: {Name:mk06fc3aec8d6e77d48d12f88ad3e8cac4dbcaa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:06.479557  117164 certs.go:381] copying /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.crt.a1e4266e -> /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.crt
	I0210 10:34:06.479664  117164 certs.go:385] copying /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.key.a1e4266e -> /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.key
	I0210 10:34:06.479723  117164 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/proxy-client.key
	I0210 10:34:06.479744  117164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/proxy-client.crt with IP's: []
	I0210 10:34:06.557632  117164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/proxy-client.crt ...
	I0210 10:34:06.557664  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/proxy-client.crt: {Name:mkdc42842f8558ef1bd6dabbdbb195eceb16a915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:06.557850  117164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/proxy-client.key ...
	I0210 10:34:06.557871  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/proxy-client.key: {Name:mk37b4882a4b8269bedcbc1c5541a47b919103f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:06.558079  117164 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 10:34:06.558128  117164 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem (1078 bytes)
	I0210 10:34:06.558154  117164 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem (1123 bytes)
	I0210 10:34:06.558177  117164 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem (1679 bytes)
	I0210 10:34:06.558735  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 10:34:06.584113  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0210 10:34:06.605137  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 10:34:06.625856  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 10:34:06.646169  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0210 10:34:06.666752  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 10:34:06.687378  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 10:34:06.708368  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 10:34:06.729501  117164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 10:34:06.750150  117164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 10:34:06.764994  117164 ssh_runner.go:195] Run: openssl version
	I0210 10:34:06.770453  117164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 10:34:06.779942  117164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 10:34:06.783908  117164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0210 10:34:06.783956  117164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 10:34:06.789228  117164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
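	The openssl/ln pair above follows OpenSSL's hashed CA directory convention: the certificate's subject hash becomes the symlink name, which is why the link is created as b5213941.0. Reproducing it by hand on the guest would look roughly like this (the hash value is taken from the symlink name in this log):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0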
	I0210 10:34:06.798730  117164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 10:34:06.802394  117164 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 10:34:06.802474  117164 kubeadm.go:392] StartCluster: {Name:addons-176336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-176336 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:34:06.802575  117164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 10:34:06.802623  117164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 10:34:06.835957  117164 cri.go:89] found id: ""
	I0210 10:34:06.836037  117164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 10:34:06.845596  117164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 10:34:06.854798  117164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 10:34:06.865704  117164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 10:34:06.865724  117164 kubeadm.go:157] found existing configuration files:
	
	I0210 10:34:06.865773  117164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 10:34:06.874169  117164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 10:34:06.874231  117164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 10:34:06.883129  117164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 10:34:06.891810  117164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 10:34:06.891870  117164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 10:34:06.900577  117164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 10:34:06.909003  117164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 10:34:06.909056  117164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 10:34:06.917691  117164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 10:34:06.925949  117164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 10:34:06.926000  117164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 10:34:06.934828  117164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 10:34:07.078352  117164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 10:34:16.543470  117164 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 10:34:16.543545  117164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 10:34:16.543648  117164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 10:34:16.543883  117164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 10:34:16.544063  117164 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 10:34:16.544172  117164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 10:34:16.545735  117164 out.go:235]   - Generating certificates and keys ...
	I0210 10:34:16.545828  117164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 10:34:16.545904  117164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 10:34:16.545983  117164 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 10:34:16.546055  117164 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 10:34:16.546136  117164 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 10:34:16.546211  117164 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 10:34:16.546289  117164 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 10:34:16.546486  117164 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-176336 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0210 10:34:16.546573  117164 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 10:34:16.546746  117164 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-176336 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I0210 10:34:16.546842  117164 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 10:34:16.546945  117164 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 10:34:16.547018  117164 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 10:34:16.547110  117164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 10:34:16.547212  117164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 10:34:16.547301  117164 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 10:34:16.547379  117164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 10:34:16.547475  117164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 10:34:16.547562  117164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 10:34:16.547689  117164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 10:34:16.547745  117164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 10:34:16.549064  117164 out.go:235]   - Booting up control plane ...
	I0210 10:34:16.549160  117164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 10:34:16.549235  117164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 10:34:16.549315  117164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 10:34:16.549435  117164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 10:34:16.549554  117164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 10:34:16.549611  117164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 10:34:16.549739  117164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 10:34:16.549872  117164 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 10:34:16.549925  117164 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.765169ms
	I0210 10:34:16.550018  117164 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 10:34:16.550114  117164 kubeadm.go:310] [api-check] The API server is healthy after 5.001293176s
	I0210 10:34:16.550245  117164 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 10:34:16.550356  117164 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 10:34:16.550414  117164 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 10:34:16.550587  117164 kubeadm.go:310] [mark-control-plane] Marking the node addons-176336 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 10:34:16.550639  117164 kubeadm.go:310] [bootstrap-token] Using token: to68r5.1uc8y3bjucjj804i
	I0210 10:34:16.552057  117164 out.go:235]   - Configuring RBAC rules ...
	I0210 10:34:16.552167  117164 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 10:34:16.552245  117164 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 10:34:16.552365  117164 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 10:34:16.552501  117164 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 10:34:16.552651  117164 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 10:34:16.552728  117164 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 10:34:16.552845  117164 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 10:34:16.552901  117164 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 10:34:16.552963  117164 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 10:34:16.552972  117164 kubeadm.go:310] 
	I0210 10:34:16.553050  117164 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 10:34:16.553065  117164 kubeadm.go:310] 
	I0210 10:34:16.553174  117164 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 10:34:16.553188  117164 kubeadm.go:310] 
	I0210 10:34:16.553223  117164 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 10:34:16.553310  117164 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 10:34:16.553387  117164 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 10:34:16.553397  117164 kubeadm.go:310] 
	I0210 10:34:16.553474  117164 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 10:34:16.553487  117164 kubeadm.go:310] 
	I0210 10:34:16.553562  117164 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 10:34:16.553576  117164 kubeadm.go:310] 
	I0210 10:34:16.553661  117164 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 10:34:16.553733  117164 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 10:34:16.553798  117164 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 10:34:16.553804  117164 kubeadm.go:310] 
	I0210 10:34:16.553881  117164 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 10:34:16.553945  117164 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 10:34:16.553951  117164 kubeadm.go:310] 
	I0210 10:34:16.554041  117164 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token to68r5.1uc8y3bjucjj804i \
	I0210 10:34:16.554194  117164 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e691840e69ea7d304c7ca12f82f88a69682411454a0b34203921a76731659912 \
	I0210 10:34:16.554326  117164 kubeadm.go:310] 	--control-plane 
	I0210 10:34:16.554348  117164 kubeadm.go:310] 
	I0210 10:34:16.554454  117164 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 10:34:16.554464  117164 kubeadm.go:310] 
	I0210 10:34:16.554577  117164 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token to68r5.1uc8y3bjucjj804i \
	I0210 10:34:16.554681  117164 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e691840e69ea7d304c7ca12f82f88a69682411454a0b34203921a76731659912 
	I0210 10:34:16.554697  117164 cni.go:84] Creating CNI manager for ""
	I0210 10:34:16.554707  117164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 10:34:16.556251  117164 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 10:34:16.557463  117164 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 10:34:16.567668  117164 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 10:34:16.584011  117164 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 10:34:16.584147  117164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:34:16.584213  117164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-176336 minikube.k8s.io/updated_at=2025_02_10T10_34_16_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=addons-176336 minikube.k8s.io/primary=true
	I0210 10:34:16.621862  117164 ops.go:34] apiserver oom_adj: -16
	I0210 10:34:16.721917  117164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:34:17.222852  117164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:34:17.722649  117164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:34:18.222057  117164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:34:18.722666  117164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:34:19.222536  117164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:34:19.722313  117164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:34:20.222519  117164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:34:20.722123  117164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:34:20.814944  117164 kubeadm.go:1113] duration metric: took 4.230847727s to wait for elevateKubeSystemPrivileges
	I0210 10:34:20.814988  117164 kubeadm.go:394] duration metric: took 14.012518595s to StartCluster
	I0210 10:34:20.815011  117164 settings.go:142] acquiring lock: {Name:mk1369a4cca9eaf53282144d4cb555c048db8e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:20.815158  117164 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 10:34:20.815632  117164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/kubeconfig: {Name:mk38b84c4ae8f3ad09ecb56633115faef0fe39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:34:20.815871  117164 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 10:34:20.815921  117164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0210 10:34:20.815957  117164 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0210 10:34:20.816098  117164 addons.go:69] Setting yakd=true in profile "addons-176336"
	I0210 10:34:20.816120  117164 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-176336"
	I0210 10:34:20.816129  117164 config.go:182] Loaded profile config "addons-176336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 10:34:20.816138  117164 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-176336"
	I0210 10:34:20.816141  117164 addons.go:238] Setting addon yakd=true in "addons-176336"
	I0210 10:34:20.816135  117164 addons.go:69] Setting storage-provisioner=true in profile "addons-176336"
	I0210 10:34:20.816160  117164 addons.go:238] Setting addon storage-provisioner=true in "addons-176336"
	I0210 10:34:20.816175  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.816167  117164 addons.go:69] Setting volcano=true in profile "addons-176336"
	I0210 10:34:20.816178  117164 addons.go:69] Setting volumesnapshots=true in profile "addons-176336"
	I0210 10:34:20.816189  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.816195  117164 addons.go:238] Setting addon volumesnapshots=true in "addons-176336"
	I0210 10:34:20.816200  117164 addons.go:69] Setting cloud-spanner=true in profile "addons-176336"
	I0210 10:34:20.816200  117164 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-176336"
	I0210 10:34:20.816212  117164 addons.go:238] Setting addon cloud-spanner=true in "addons-176336"
	I0210 10:34:20.816217  117164 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-176336"
	I0210 10:34:20.816224  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.816232  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.816288  117164 addons.go:69] Setting gcp-auth=true in profile "addons-176336"
	I0210 10:34:20.816350  117164 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-176336"
	I0210 10:34:20.816369  117164 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-176336"
	I0210 10:34:20.816372  117164 mustload.go:65] Loading cluster: addons-176336
	I0210 10:34:20.816400  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.816450  117164 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-176336"
	I0210 10:34:20.816487  117164 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-176336"
	I0210 10:34:20.816499  117164 addons.go:69] Setting default-storageclass=true in profile "addons-176336"
	I0210 10:34:20.816519  117164 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-176336"
	I0210 10:34:20.816528  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.816576  117164 config.go:182] Loaded profile config "addons-176336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 10:34:20.816655  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.816682  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.816691  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.816693  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.816707  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.816718  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.816726  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.816735  117164 addons.go:69] Setting ingress-dns=true in profile "addons-176336"
	I0210 10:34:20.816743  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.816748  117164 addons.go:238] Setting addon ingress-dns=true in "addons-176336"
	I0210 10:34:20.816771  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.816108  117164 addons.go:69] Setting inspektor-gadget=true in profile "addons-176336"
	I0210 10:34:20.816192  117164 addons.go:238] Setting addon volcano=true in "addons-176336"
	I0210 10:34:20.817323  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.817347  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.817363  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.816341  117164 addons.go:69] Setting metrics-server=true in profile "addons-176336"
	I0210 10:34:20.817518  117164 addons.go:238] Setting addon metrics-server=true in "addons-176336"
	I0210 10:34:20.817540  117164 addons.go:238] Setting addon inspektor-gadget=true in "addons-176336"
	I0210 10:34:20.817557  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.817581  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.816783  117164 addons.go:69] Setting registry=true in profile "addons-176336"
	I0210 10:34:20.817770  117164 addons.go:238] Setting addon registry=true in "addons-176336"
	I0210 10:34:20.817317  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.817992  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.818163  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.818170  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.817301  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.818221  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.818239  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.818207  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.818270  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.818269  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.818354  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.818309  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.818702  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.818739  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.816727  117164 addons.go:69] Setting ingress=true in profile "addons-176336"
	I0210 10:34:20.819693  117164 addons.go:238] Setting addon ingress=true in "addons-176336"
	I0210 10:34:20.816189  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.818671  117164 out.go:177] * Verifying Kubernetes components...
	I0210 10:34:20.820092  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.820461  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.820499  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.820667  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.820708  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.820956  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.821476  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.821518  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.821581  117164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:34:20.848141  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0210 10:34:20.848162  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34749
	I0210 10:34:20.848444  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.848495  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.850838  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I0210 10:34:20.850987  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39017
	I0210 10:34:20.851118  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34563
	I0210 10:34:20.852539  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.855457  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46831
	I0210 10:34:20.856222  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.856482  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.856597  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.856689  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.856886  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.856910  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.857062  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.857106  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.857120  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.857132  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.857146  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.857195  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.857209  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.857250  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.857271  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.857591  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.857734  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.857752  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.858145  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.858600  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.858646  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.859349  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.859384  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.859350  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.859457  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.859733  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.859778  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.859930  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.859966  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.860153  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.860297  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.860331  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.860471  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0210 10:34:20.860555  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.860585  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.861380  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.863702  117164 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-176336"
	I0210 10:34:20.863746  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.864082  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.864100  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.864359  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.864381  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.864816  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.865359  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.865401  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.875542  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I0210 10:34:20.876026  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.876691  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.876713  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.877076  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.877260  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.880004  117164 addons.go:238] Setting addon default-storageclass=true in "addons-176336"
	I0210 10:34:20.880052  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.880439  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.880494  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.883948  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0210 10:34:20.884586  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.885143  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.885165  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.885204  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46415
	I0210 10:34:20.885740  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.891310  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.891823  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.891850  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.892255  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.892861  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.892908  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.893228  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41923
	I0210 10:34:20.893656  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.894162  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.894184  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.894253  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33023
	I0210 10:34:20.894447  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32897
	I0210 10:34:20.894662  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.894781  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.895260  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.895276  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.895618  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.896311  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.896347  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.896528  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35353
	I0210 10:34:20.896714  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.896981  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.897807  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.897836  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.898199  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.898552  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.900109  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.900547  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:20.901503  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0210 10:34:20.902025  117164 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0210 10:34:20.903426  117164 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0210 10:34:20.903446  117164 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0210 10:34:20.903467  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.906799  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.907233  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.907262  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.907528  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.907711  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.907916  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.908104  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.911852  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.911901  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.912419  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.912482  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.912847  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33845
	I0210 10:34:20.912937  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39467
	I0210 10:34:20.913657  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.913676  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.913768  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.914079  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.914265  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.914278  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.914411  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.914424  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.914542  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.914552  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.914692  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.914704  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.915163  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.915168  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.915230  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.915242  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.915460  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.915805  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.915840  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.915942  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.915976  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.916034  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.917557  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.918168  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.919851  117164 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0210 10:34:20.919907  117164 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0210 10:34:20.921246  117164 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0210 10:34:20.921265  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0210 10:34:20.921284  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.922547  117164 out.go:177]   - Using image docker.io/registry:2.8.3
	I0210 10:34:20.923655  117164 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0210 10:34:20.923674  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0210 10:34:20.923694  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.924663  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.924877  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0210 10:34:20.925397  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.925963  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.925982  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.926046  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.926064  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.926425  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.926475  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.926635  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.926865  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.926905  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.926939  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.927069  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.927405  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.927427  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.927577  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.928732  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.928930  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.928946  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.929083  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.930715  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42439
	I0210 10:34:20.930782  117164 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0210 10:34:20.931223  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.931826  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.931848  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.932038  117164 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0210 10:34:20.932064  117164 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0210 10:34:20.932088  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.934168  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45939
	I0210 10:34:20.934595  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.934735  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.935442  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42291
	I0210 10:34:20.935507  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.935524  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.935655  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0210 10:34:20.936074  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.936128  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.936217  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.936274  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.936730  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.936765  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.937156  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.937182  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.937201  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.937230  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.937473  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.937541  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.937621  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.937765  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.937785  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.938690  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.938744  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.940033  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.940516  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.940537  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.940779  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.940986  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.941059  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41867
	I0210 10:34:20.941258  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41941
	I0210 10:34:20.941715  117164 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0210 10:34:20.941882  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.941959  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.942039  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.942812  117164 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 10:34:20.942834  117164 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 10:34:20.942855  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.942818  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.942902  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.942911  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.942919  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.942976  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.943058  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0210 10:34:20.943239  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.943444  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.943573  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.943900  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.944136  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.944157  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.944224  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.944860  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I0210 10:34:20.945149  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.945451  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.946531  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.946639  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.947387  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0210 10:34:20.947553  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.947894  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.947987  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.948003  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.948037  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.948095  117164 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0210 10:34:20.948155  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.948212  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:20.948221  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:20.948316  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.948409  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.948534  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.950827  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.950850  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.948789  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.950912  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.948815  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:20.948835  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:20.950966  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:20.950974  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:20.950981  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:20.949135  117164 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0210 10:34:20.951022  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0210 10:34:20.951037  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.950676  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.951434  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.951717  117164 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0210 10:34:20.952761  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.952979  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45045
	I0210 10:34:20.953284  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.953346  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:20.953354  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	W0210 10:34:20.953435  117164 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0210 10:34:20.953719  117164 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0210 10:34:20.954903  117164 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0210 10:34:20.955447  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.956079  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.956790  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.956808  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.957391  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.957445  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.957650  117164 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0210 10:34:20.957781  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.957998  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.958375  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.958402  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.958559  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.958749  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.958776  117164 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0210 10:34:20.958995  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.959206  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.959735  117164 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0210 10:34:20.959995  117164 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0210 10:34:20.960014  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0210 10:34:20.960032  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.960569  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.961794  117164 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0210 10:34:20.961992  117164 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0210 10:34:20.963851  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.963912  117164 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0210 10:34:20.964014  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35277
	I0210 10:34:20.964367  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.964398  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.964502  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.964598  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.965075  117164 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0210 10:34:20.965092  117164 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0210 10:34:20.965110  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.965027  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.965170  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.965188  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.965401  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.965581  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.965640  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.965920  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.966245  117164 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0210 10:34:20.967313  117164 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0210 10:34:20.967337  117164 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0210 10:34:20.967376  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.967838  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.969186  117164 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0210 10:34:20.969983  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.970332  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.970358  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.970499  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.970726  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.970886  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.970943  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.970966  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.970998  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.971044  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.971094  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.971282  117164 out.go:177]   - Using image docker.io/busybox:stable
	I0210 10:34:20.971524  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.971656  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.971790  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.972607  117164 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0210 10:34:20.972629  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0210 10:34:20.972645  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.977001  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40151
	I0210 10:34:20.977616  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.977801  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40389
	I0210 10:34:20.978581  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.978702  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.978289  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.978784  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.978812  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.978832  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.978985  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.979065  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.979124  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.979237  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.979384  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.979385  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.979540  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.979551  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.979628  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I0210 10:34:20.979939  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.980081  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.980312  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.980601  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.980617  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.980933  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.981108  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:20.981524  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.983058  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.983282  117164 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0210 10:34:20.983465  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:20.984398  117164 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0210 10:34:20.984486  117164 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0210 10:34:20.984509  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0210 10:34:20.984530  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.984628  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0210 10:34:20.985379  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:20.986026  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:20.986043  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:20.986255  117164 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 10:34:20.986571  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:20.987127  117164 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 10:34:20.987251  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:20.987551  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:20.987829  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.987978  117164 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 10:34:20.987995  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 10:34:20.988011  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.988225  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.988295  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.988703  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.988948  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.989111  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.989324  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.989661  117164 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 10:34:20.991119  117164 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0210 10:34:20.991154  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0210 10:34:20.991174  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:20.991130  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.992013  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.992146  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.992638  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.992831  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.993022  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.993179  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:20.994234  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.994556  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:20.994583  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:20.994858  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:20.995064  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:20.995286  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:20.995501  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:21.003843  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I0210 10:34:21.004360  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:21.004836  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:21.004855  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:21.005206  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:21.005423  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:21.007009  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:21.007228  117164 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 10:34:21.007250  117164 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 10:34:21.007271  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:21.009798  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:21.010132  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:21.010175  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:21.010343  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:21.010542  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:21.010673  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:21.010782  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:21.242053  117164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 10:34:21.242099  117164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0210 10:34:21.307475  117164 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0210 10:34:21.307504  117164 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0210 10:34:21.318631  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0210 10:34:21.334532  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 10:34:21.349323  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0210 10:34:21.369472  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0210 10:34:21.388197  117164 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 10:34:21.388218  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0210 10:34:21.412779  117164 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0210 10:34:21.412807  117164 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0210 10:34:21.428952  117164 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0210 10:34:21.428980  117164 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0210 10:34:21.434690  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0210 10:34:21.448995  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0210 10:34:21.454488  117164 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0210 10:34:21.454509  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0210 10:34:21.457164  117164 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0210 10:34:21.457183  117164 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0210 10:34:21.468838  117164 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0210 10:34:21.468860  117164 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0210 10:34:21.485799  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0210 10:34:21.497584  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 10:34:21.599668  117164 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0210 10:34:21.599690  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0210 10:34:21.610669  117164 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 10:34:21.610694  117164 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 10:34:21.613140  117164 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0210 10:34:21.613156  117164 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0210 10:34:21.622480  117164 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0210 10:34:21.622505  117164 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0210 10:34:21.637181  117164 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0210 10:34:21.637206  117164 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0210 10:34:21.676892  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0210 10:34:21.738833  117164 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0210 10:34:21.738865  117164 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0210 10:34:21.763799  117164 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 10:34:21.763830  117164 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 10:34:21.766768  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0210 10:34:21.774473  117164 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0210 10:34:21.774496  117164 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0210 10:34:21.794634  117164 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0210 10:34:21.794656  117164 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0210 10:34:21.873467  117164 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0210 10:34:21.873494  117164 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0210 10:34:21.947724  117164 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0210 10:34:21.947756  117164 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0210 10:34:21.951409  117164 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0210 10:34:21.951440  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0210 10:34:21.960124  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 10:34:22.111733  117164 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0210 10:34:22.111766  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0210 10:34:22.181727  117164 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 10:34:22.181762  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0210 10:34:22.206371  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0210 10:34:22.384829  117164 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0210 10:34:22.384864  117164 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0210 10:34:22.487440  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 10:34:22.632755  117164 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0210 10:34:22.632786  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0210 10:34:23.054752  117164 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.812613903s)
	I0210 10:34:23.054787  117164 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.812696808s)
	I0210 10:34:23.054807  117164 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0210 10:34:23.055727  117164 node_ready.go:35] waiting up to 6m0s for node "addons-176336" to be "Ready" ...
	I0210 10:34:23.057399  117164 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0210 10:34:23.057414  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0210 10:34:23.058665  117164 node_ready.go:49] node "addons-176336" has status "Ready":"True"
	I0210 10:34:23.058684  117164 node_ready.go:38] duration metric: took 2.929916ms for node "addons-176336" to be "Ready" ...
	I0210 10:34:23.058693  117164 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 10:34:23.063598  117164 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-dh7cx" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:23.404866  117164 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0210 10:34:23.404899  117164 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0210 10:34:23.560211  117164 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-176336" context rescaled to 1 replicas
	I0210 10:34:23.816438  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0210 10:34:24.526071  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.207391215s)
	I0210 10:34:24.526144  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:24.526160  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:24.526488  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:24.526511  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:24.526520  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:24.526527  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:24.526777  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:24.526796  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:25.113143  117164 pod_ready.go:103] pod "amd-gpu-device-plugin-dh7cx" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:25.687886  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.353314092s)
	I0210 10:34:25.687935  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:25.687947  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:25.687954  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.33859615s)
	I0210 10:34:25.687995  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:25.688018  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:25.688016  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.3185191s)
	I0210 10:34:25.688059  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.253340389s)
	I0210 10:34:25.688091  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:25.688103  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:25.688117  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:25.688105  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:25.688299  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:25.688359  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:25.688366  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:25.688374  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:25.688382  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:25.688396  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:25.688401  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:25.688405  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:25.688413  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:25.688441  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:25.688687  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:25.688703  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:25.688691  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:25.688730  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:25.688740  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:25.688746  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:25.688765  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:25.688774  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:25.688781  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:25.688790  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:25.688915  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:25.688930  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:25.688938  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:25.688944  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:25.689263  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:25.689282  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:25.689147  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:25.690908  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:25.690927  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:25.690934  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:26.335608  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.886578178s)
	I0210 10:34:26.335659  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:26.335672  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:26.335954  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:26.335972  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:26.335984  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:26.335993  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:26.336207  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:26.336223  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:26.430694  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:26.430717  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:26.431110  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:26.431134  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:27.569601  117164 pod_ready.go:103] pod "amd-gpu-device-plugin-dh7cx" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:27.797972  117164 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0210 10:34:27.798027  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:27.801429  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:27.801833  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:27.801862  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:27.802066  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:27.802268  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:27.802462  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:27.802589  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:28.129860  117164 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0210 10:34:28.231583  117164 addons.go:238] Setting addon gcp-auth=true in "addons-176336"
	I0210 10:34:28.231649  117164 host.go:66] Checking if "addons-176336" exists ...
	I0210 10:34:28.231964  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:28.231996  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:28.247243  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39369
	I0210 10:34:28.247709  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:28.248241  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:28.248265  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:28.248623  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:28.249308  117164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:34:28.249348  117164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:34:28.264122  117164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0210 10:34:28.264592  117164 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:34:28.265090  117164 main.go:141] libmachine: Using API Version  1
	I0210 10:34:28.265120  117164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:34:28.265503  117164 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:34:28.265739  117164 main.go:141] libmachine: (addons-176336) Calling .GetState
	I0210 10:34:28.267808  117164 main.go:141] libmachine: (addons-176336) Calling .DriverName
	I0210 10:34:28.268037  117164 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0210 10:34:28.268062  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHHostname
	I0210 10:34:28.270928  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:28.271542  117164 main.go:141] libmachine: (addons-176336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:46:17", ip: ""} in network mk-addons-176336: {Iface:virbr1 ExpiryTime:2025-02-10 11:33:48 +0000 UTC Type:0 Mac:52:54:00:52:46:17 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:addons-176336 Clientid:01:52:54:00:52:46:17}
	I0210 10:34:28.271569  117164 main.go:141] libmachine: (addons-176336) DBG | domain addons-176336 has defined IP address 192.168.39.19 and MAC address 52:54:00:52:46:17 in network mk-addons-176336
	I0210 10:34:28.271716  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHPort
	I0210 10:34:28.271896  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHKeyPath
	I0210 10:34:28.272062  117164 main.go:141] libmachine: (addons-176336) Calling .GetSSHUsername
	I0210 10:34:28.272199  117164 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/addons-176336/id_rsa Username:docker}
	I0210 10:34:28.908728  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.422887699s)
	I0210 10:34:28.908794  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.908807  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.908886  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.411265204s)
	I0210 10:34:28.908939  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.908957  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.908959  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.232032377s)
	I0210 10:34:28.908990  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.909001  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.909021  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.142211421s)
	I0210 10:34:28.909043  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.909056  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.909098  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.948943568s)
	I0210 10:34:28.909122  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.909134  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.909151  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.702742351s)
	I0210 10:34:28.909170  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.909178  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.909287  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.421809515s)
	W0210 10:34:28.909332  117164 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0210 10:34:28.909386  117164 retry.go:31] will retry after 334.911647ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0210 10:34:28.913216  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.913269  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.913279  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.913292  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.913306  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.913371  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.913405  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.913417  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.913424  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.913435  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.913477  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.913504  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.913515  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.913523  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.913534  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.913585  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.913615  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.913627  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.913635  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.913646  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.913692  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.913723  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.913736  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.913749  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.913761  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.914015  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.914051  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.914063  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.914070  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.914081  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.914190  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.914221  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.914234  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.914289  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.914333  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.914347  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.915267  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.915307  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.915314  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.915529  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.915558  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.915564  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.915575  117164 addons.go:479] Verifying addon metrics-server=true in "addons-176336"
	I0210 10:34:28.916283  117164 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-176336 service yakd-dashboard -n yakd-dashboard
	
	I0210 10:34:28.916338  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.916444  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.916470  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.916489  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.916501  117164 addons.go:479] Verifying addon registry=true in "addons-176336"
	I0210 10:34:28.916506  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:28.916538  117164 addons.go:479] Verifying addon ingress=true in "addons-176336"
	I0210 10:34:28.917728  117164 out.go:177] * Verifying registry addon...
	I0210 10:34:28.917791  117164 out.go:177] * Verifying ingress addon...
	I0210 10:34:28.919946  117164 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0210 10:34:28.919949  117164 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0210 10:34:28.946183  117164 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0210 10:34:28.946209  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:28.946630  117164 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0210 10:34:28.946654  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:28.961758  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:28.961780  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:28.962045  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:28.962101  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:28.962131  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:29.244890  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 10:34:29.430314  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:29.430617  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:29.574761  117164 pod_ready.go:103] pod "amd-gpu-device-plugin-dh7cx" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:29.931968  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:29.932445  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:30.330447  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.513956894s)
	I0210 10:34:30.330503  117164 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.062441505s)
	I0210 10:34:30.330519  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:30.330536  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:30.330862  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:30.330906  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:30.330923  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:30.330941  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:30.330954  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:30.331202  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:30.331249  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:30.331257  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:30.331272  117164 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-176336"
	I0210 10:34:30.332768  117164 out.go:177] * Verifying csi-hostpath-driver addon...
	I0210 10:34:30.332767  117164 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 10:34:30.334458  117164 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0210 10:34:30.335069  117164 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0210 10:34:30.335850  117164 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0210 10:34:30.335871  117164 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0210 10:34:30.361493  117164 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0210 10:34:30.361523  117164 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0210 10:34:30.366703  117164 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0210 10:34:30.366724  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:30.439713  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:30.439965  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:30.458119  117164 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0210 10:34:30.458152  117164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0210 10:34:30.578265  117164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0210 10:34:30.839653  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:30.940360  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:30.940369  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:30.969771  117164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.724829571s)
	I0210 10:34:30.969840  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:30.969850  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:30.970138  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:30.970164  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:30.970178  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:30.970186  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:30.970180  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:30.970410  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:30.970425  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:31.342379  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:31.431169  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:31.431387  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:31.573563  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:31.573599  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:31.573936  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:31.573985  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:31.574002  117164 main.go:141] libmachine: Making call to close driver server
	I0210 10:34:31.574028  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:31.574084  117164 main.go:141] libmachine: (addons-176336) Calling .Close
	I0210 10:34:31.574331  117164 main.go:141] libmachine: Successfully made call to close driver server
	I0210 10:34:31.574354  117164 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 10:34:31.574370  117164 main.go:141] libmachine: (addons-176336) DBG | Closing plugin on server side
	I0210 10:34:31.575349  117164 addons.go:479] Verifying addon gcp-auth=true in "addons-176336"
	I0210 10:34:31.576903  117164 out.go:177] * Verifying gcp-auth addon...
	I0210 10:34:31.578932  117164 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0210 10:34:31.631038  117164 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0210 10:34:31.631160  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:31.661310  117164 pod_ready.go:103] pod "amd-gpu-device-plugin-dh7cx" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:31.838814  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:31.925965  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:31.926039  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:32.082877  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:32.340329  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:32.437467  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:32.440470  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:32.582766  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:32.839705  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:32.924565  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:32.924929  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:33.082366  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:33.338528  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:33.423798  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:33.424935  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:33.583059  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:33.838875  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:33.923584  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:33.923839  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:34.069094  117164 pod_ready.go:103] pod "amd-gpu-device-plugin-dh7cx" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:34.081109  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:34.338069  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:34.424392  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:34.424545  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:34.582182  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:34.838062  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:34.924005  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:34.924130  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:35.082642  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:35.339013  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:35.681859  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:35.681927  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:35.681981  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:35.838883  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:35.923677  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:35.923741  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:36.081790  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:36.339582  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:36.423274  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:36.424760  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:36.568912  117164 pod_ready.go:103] pod "amd-gpu-device-plugin-dh7cx" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:36.582991  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:36.838473  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:36.923699  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:36.924771  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:37.081976  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:37.338032  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:37.425733  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:37.425820  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:37.582415  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:37.838800  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:37.923493  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:37.924311  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:38.082523  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:38.338762  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:38.423494  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:38.423642  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:38.569979  117164 pod_ready.go:103] pod "amd-gpu-device-plugin-dh7cx" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:38.582166  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:38.840495  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:38.924118  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:38.924331  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:39.083049  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:39.338783  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:39.425392  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:39.425528  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:39.582945  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:39.838246  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:39.923621  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:39.923954  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:40.069263  117164 pod_ready.go:93] pod "amd-gpu-device-plugin-dh7cx" in "kube-system" namespace has status "Ready":"True"
	I0210 10:34:40.069286  117164 pod_ready.go:82] duration metric: took 17.005662666s for pod "amd-gpu-device-plugin-dh7cx" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.069296  117164 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cjf5q" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.073242  117164 pod_ready.go:93] pod "coredns-668d6bf9bc-cjf5q" in "kube-system" namespace has status "Ready":"True"
	I0210 10:34:40.073264  117164 pod_ready.go:82] duration metric: took 3.961322ms for pod "coredns-668d6bf9bc-cjf5q" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.073273  117164 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zd2kh" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.074947  117164 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-zd2kh" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zd2kh" not found
	I0210 10:34:40.074968  117164 pod_ready.go:82] duration metric: took 1.689686ms for pod "coredns-668d6bf9bc-zd2kh" in "kube-system" namespace to be "Ready" ...
	E0210 10:34:40.074977  117164 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-zd2kh" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zd2kh" not found
	I0210 10:34:40.074983  117164 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-176336" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.078700  117164 pod_ready.go:93] pod "etcd-addons-176336" in "kube-system" namespace has status "Ready":"True"
	I0210 10:34:40.078715  117164 pod_ready.go:82] duration metric: took 3.722756ms for pod "etcd-addons-176336" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.078721  117164 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-176336" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.082103  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:40.084178  117164 pod_ready.go:93] pod "kube-apiserver-addons-176336" in "kube-system" namespace has status "Ready":"True"
	I0210 10:34:40.084199  117164 pod_ready.go:82] duration metric: took 5.471632ms for pod "kube-apiserver-addons-176336" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.084211  117164 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-176336" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.266903  117164 pod_ready.go:93] pod "kube-controller-manager-addons-176336" in "kube-system" namespace has status "Ready":"True"
	I0210 10:34:40.266929  117164 pod_ready.go:82] duration metric: took 182.708786ms for pod "kube-controller-manager-addons-176336" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.266939  117164 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gt77j" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.338976  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:40.424148  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:40.424209  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:40.583118  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:40.667594  117164 pod_ready.go:93] pod "kube-proxy-gt77j" in "kube-system" namespace has status "Ready":"True"
	I0210 10:34:40.667616  117164 pod_ready.go:82] duration metric: took 400.670953ms for pod "kube-proxy-gt77j" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.667627  117164 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-176336" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:40.838881  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:40.923904  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:40.924010  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:41.198550  117164 pod_ready.go:93] pod "kube-scheduler-addons-176336" in "kube-system" namespace has status "Ready":"True"
	I0210 10:34:41.198586  117164 pod_ready.go:82] duration metric: took 530.950716ms for pod "kube-scheduler-addons-176336" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:41.198604  117164 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-t7lzz" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:41.198994  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:41.338462  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:41.423392  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:41.424781  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:41.583164  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:41.838892  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:41.923425  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:41.923633  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:42.082688  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:42.339027  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:42.424089  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:42.424128  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:42.581666  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:42.839200  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:42.922728  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:42.923258  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:43.083957  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:43.204972  117164 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t7lzz" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:43.339342  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:43.424115  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:43.424256  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:43.582354  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:43.838662  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:43.924692  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:43.924736  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:44.083073  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:44.338789  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:44.424086  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:44.425038  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:44.581985  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:44.838370  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:44.925354  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:44.925581  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:45.083173  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:45.339708  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:45.425687  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:45.425751  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:45.581844  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:45.710853  117164 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t7lzz" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:45.838844  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:45.923683  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:45.925260  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:46.082564  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:46.338450  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:46.424364  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:46.424454  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:46.584518  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:46.838436  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:47.310463  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:47.310670  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:47.410652  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:47.410725  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:47.425503  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:47.425606  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:47.583908  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:47.838993  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:47.924277  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:47.924277  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:48.082062  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:48.203838  117164 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t7lzz" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:48.339413  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:48.424194  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:48.424300  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:48.582273  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:48.839058  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:48.923874  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:48.924030  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:49.082409  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:49.340227  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:49.424054  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:49.424220  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:49.582633  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:49.844344  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:49.924067  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:49.924739  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:50.082553  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:50.204865  117164 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t7lzz" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:50.493118  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:50.493118  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:50.493665  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:50.582489  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:50.837561  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:50.923613  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:50.923901  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:51.082196  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:51.339121  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:51.510554  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:51.510569  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:51.582287  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:51.841434  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:51.925242  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:51.925909  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:52.082417  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:52.338397  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:52.424240  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:52.425182  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:52.582034  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:52.703586  117164 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-t7lzz" in "kube-system" namespace has status "Ready":"False"
	I0210 10:34:52.838486  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:52.923710  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:52.923896  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:53.082517  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:53.338034  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:53.424020  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:53.424411  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:53.582819  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:53.838928  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:53.923792  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:53.923849  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:54.083690  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:54.345359  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:54.424377  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:54.424439  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:54.583127  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:54.704079  117164 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-t7lzz" in "kube-system" namespace has status "Ready":"True"
	I0210 10:34:54.704103  117164 pod_ready.go:82] duration metric: took 13.505491047s for pod "nvidia-device-plugin-daemonset-t7lzz" in "kube-system" namespace to be "Ready" ...
	I0210 10:34:54.704110  117164 pod_ready.go:39] duration metric: took 31.645406264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 10:34:54.704130  117164 api_server.go:52] waiting for apiserver process to appear ...
	I0210 10:34:54.704189  117164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 10:34:54.722767  117164 api_server.go:72] duration metric: took 33.906860149s to wait for apiserver process to appear ...
	I0210 10:34:54.722787  117164 api_server.go:88] waiting for apiserver healthz status ...
	I0210 10:34:54.722808  117164 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0210 10:34:54.727114  117164 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0210 10:34:54.727996  117164 api_server.go:141] control plane version: v1.32.1
	I0210 10:34:54.728021  117164 api_server.go:131] duration metric: took 5.227194ms to wait for apiserver health ...
	I0210 10:34:54.728031  117164 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 10:34:54.731623  117164 system_pods.go:59] 18 kube-system pods found
	I0210 10:34:54.731654  117164 system_pods.go:61] "amd-gpu-device-plugin-dh7cx" [f72c0172-3942-4bd6-917c-f3b7e3fd7607] Running
	I0210 10:34:54.731659  117164 system_pods.go:61] "coredns-668d6bf9bc-cjf5q" [9f9868e3-56a5-44c2-8114-959f0fc9e24f] Running
	I0210 10:34:54.731665  117164 system_pods.go:61] "csi-hostpath-attacher-0" [84664bc1-fc4d-4ea8-a71a-5933b7e45ceb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0210 10:34:54.731672  117164 system_pods.go:61] "csi-hostpath-resizer-0" [936bee80-ad8e-4e5c-b8c0-293f2fea5d8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0210 10:34:54.731684  117164 system_pods.go:61] "csi-hostpathplugin-9d9mf" [838867ed-d927-4b16-bc48-a753cebc7ce1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0210 10:34:54.731690  117164 system_pods.go:61] "etcd-addons-176336" [06c8c072-9fb3-402d-b51d-a474a8f22e9a] Running
	I0210 10:34:54.731697  117164 system_pods.go:61] "kube-apiserver-addons-176336" [90f1eb13-3dda-4095-9c2b-a0930912829a] Running
	I0210 10:34:54.731701  117164 system_pods.go:61] "kube-controller-manager-addons-176336" [ff3b3458-4632-483b-b69b-8b87bcc50731] Running
	I0210 10:34:54.731710  117164 system_pods.go:61] "kube-ingress-dns-minikube" [bc9d9636-0d0b-4f4c-815f-0c27eb802a57] Running
	I0210 10:34:54.731717  117164 system_pods.go:61] "kube-proxy-gt77j" [7c90dafe-4fae-4761-b3d0-99cc5a66a0c3] Running
	I0210 10:34:54.731721  117164 system_pods.go:61] "kube-scheduler-addons-176336" [e90cd1c1-664c-44bf-aa07-5ad76a0940b5] Running
	I0210 10:34:54.731731  117164 system_pods.go:61] "metrics-server-7fbb699795-8zxm2" [dadc63e3-cb8c-4654-a037-f61e6fd19b18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 10:34:54.731741  117164 system_pods.go:61] "nvidia-device-plugin-daemonset-t7lzz" [7a0f2255-4f39-406e-a4d4-d3339799a3cf] Running
	I0210 10:34:54.731754  117164 system_pods.go:61] "registry-6c88467877-h788n" [de15c872-5255-4828-89b5-5881bd20be96] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0210 10:34:54.731766  117164 system_pods.go:61] "registry-proxy-pz2jr" [c39641af-4408-48ac-ad63-707a945defdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0210 10:34:54.731783  117164 system_pods.go:61] "snapshot-controller-68b874b76f-f26sz" [7a51c1e1-4c03-4bc1-abc3-8806f8820eeb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 10:34:54.731798  117164 system_pods.go:61] "snapshot-controller-68b874b76f-g8jdz" [bef3f7cb-ce84-4a94-ac84-29587b986a05] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 10:34:54.731809  117164 system_pods.go:61] "storage-provisioner" [cc9d0b36-b428-4c7e-b022-c4db04559fae] Running
	I0210 10:34:54.731819  117164 system_pods.go:74] duration metric: took 3.782441ms to wait for pod list to return data ...
	I0210 10:34:54.731829  117164 default_sa.go:34] waiting for default service account to be created ...
	I0210 10:34:54.734174  117164 default_sa.go:45] found service account: "default"
	I0210 10:34:54.734192  117164 default_sa.go:55] duration metric: took 2.355357ms for default service account to be created ...
	I0210 10:34:54.734200  117164 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 10:34:54.736985  117164 system_pods.go:86] 18 kube-system pods found
	I0210 10:34:54.737007  117164 system_pods.go:89] "amd-gpu-device-plugin-dh7cx" [f72c0172-3942-4bd6-917c-f3b7e3fd7607] Running
	I0210 10:34:54.737012  117164 system_pods.go:89] "coredns-668d6bf9bc-cjf5q" [9f9868e3-56a5-44c2-8114-959f0fc9e24f] Running
	I0210 10:34:54.737019  117164 system_pods.go:89] "csi-hostpath-attacher-0" [84664bc1-fc4d-4ea8-a71a-5933b7e45ceb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0210 10:34:54.737028  117164 system_pods.go:89] "csi-hostpath-resizer-0" [936bee80-ad8e-4e5c-b8c0-293f2fea5d8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0210 10:34:54.737041  117164 system_pods.go:89] "csi-hostpathplugin-9d9mf" [838867ed-d927-4b16-bc48-a753cebc7ce1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0210 10:34:54.737050  117164 system_pods.go:89] "etcd-addons-176336" [06c8c072-9fb3-402d-b51d-a474a8f22e9a] Running
	I0210 10:34:54.737056  117164 system_pods.go:89] "kube-apiserver-addons-176336" [90f1eb13-3dda-4095-9c2b-a0930912829a] Running
	I0210 10:34:54.737062  117164 system_pods.go:89] "kube-controller-manager-addons-176336" [ff3b3458-4632-483b-b69b-8b87bcc50731] Running
	I0210 10:34:54.737072  117164 system_pods.go:89] "kube-ingress-dns-minikube" [bc9d9636-0d0b-4f4c-815f-0c27eb802a57] Running
	I0210 10:34:54.737084  117164 system_pods.go:89] "kube-proxy-gt77j" [7c90dafe-4fae-4761-b3d0-99cc5a66a0c3] Running
	I0210 10:34:54.737088  117164 system_pods.go:89] "kube-scheduler-addons-176336" [e90cd1c1-664c-44bf-aa07-5ad76a0940b5] Running
	I0210 10:34:54.737099  117164 system_pods.go:89] "metrics-server-7fbb699795-8zxm2" [dadc63e3-cb8c-4654-a037-f61e6fd19b18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 10:34:54.737107  117164 system_pods.go:89] "nvidia-device-plugin-daemonset-t7lzz" [7a0f2255-4f39-406e-a4d4-d3339799a3cf] Running
	I0210 10:34:54.737114  117164 system_pods.go:89] "registry-6c88467877-h788n" [de15c872-5255-4828-89b5-5881bd20be96] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0210 10:34:54.737122  117164 system_pods.go:89] "registry-proxy-pz2jr" [c39641af-4408-48ac-ad63-707a945defdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0210 10:34:54.737129  117164 system_pods.go:89] "snapshot-controller-68b874b76f-f26sz" [7a51c1e1-4c03-4bc1-abc3-8806f8820eeb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 10:34:54.737140  117164 system_pods.go:89] "snapshot-controller-68b874b76f-g8jdz" [bef3f7cb-ce84-4a94-ac84-29587b986a05] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 10:34:54.737150  117164 system_pods.go:89] "storage-provisioner" [cc9d0b36-b428-4c7e-b022-c4db04559fae] Running
	I0210 10:34:54.737162  117164 system_pods.go:126] duration metric: took 2.956043ms to wait for k8s-apps to be running ...
	I0210 10:34:54.737173  117164 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 10:34:54.737217  117164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:34:54.751000  117164 system_svc.go:56] duration metric: took 13.821956ms WaitForService to wait for kubelet
	I0210 10:34:54.751023  117164 kubeadm.go:582] duration metric: took 33.935119157s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 10:34:54.751051  117164 node_conditions.go:102] verifying NodePressure condition ...
	I0210 10:34:54.752692  117164 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 10:34:54.752744  117164 node_conditions.go:123] node cpu capacity is 2
	I0210 10:34:54.752763  117164 node_conditions.go:105] duration metric: took 1.70285ms to run NodePressure ...
	I0210 10:34:54.752782  117164 start.go:241] waiting for startup goroutines ...
	I0210 10:34:54.838428  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:54.924024  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:54.924069  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:55.081624  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:55.339499  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:55.423413  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:55.423772  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:55.582894  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:55.838239  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:55.923333  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:55.924252  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:56.082467  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:56.338926  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:56.426611  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:56.428614  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:56.582564  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:56.839153  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:56.924335  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:56.924567  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:57.082388  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:57.338195  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:57.423402  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:57.423582  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:57.582690  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:58.128338  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:58.128899  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:58.128936  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:58.129062  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:58.338022  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:58.424061  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:58.424401  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:58.582483  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:58.838508  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:58.926218  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:58.926245  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:59.082352  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:59.338552  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:59.423268  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:34:59.423304  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:59.582784  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:34:59.839078  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:34:59.924465  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:34:59.924508  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:00.082302  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:00.338218  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:00.422889  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:00.423044  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:00.583316  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:00.838196  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:00.924228  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:00.924270  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:01.082549  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:01.339060  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:01.424996  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:01.425004  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:01.582355  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:01.838848  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:01.924661  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:01.925030  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:02.081837  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:02.339544  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:02.423764  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:02.423845  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:02.582979  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:02.839671  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:02.924772  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:02.924972  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:03.082341  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:03.338551  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:03.432202  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:03.432394  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:03.582621  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:03.838905  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:03.924226  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:03.924255  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:04.082612  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:04.339609  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:04.440057  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:04.440203  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:04.582326  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:04.844246  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:05.293939  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:05.295223  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:05.295878  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:05.339994  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:05.441871  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:05.442065  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:05.583664  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:05.838591  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:05.923833  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:05.923899  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:06.082315  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:06.338729  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:06.434290  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:06.434360  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:06.582262  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:06.838175  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:06.922806  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:06.923840  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:07.084063  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:07.338175  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:07.424913  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:07.425008  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:07.582711  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:07.839137  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:07.928631  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:07.928849  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:08.083098  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:08.338294  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:08.423105  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:08.423281  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:08.582443  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:08.838777  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:08.924361  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:08.924505  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:09.081854  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:09.339132  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:09.424709  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:09.424806  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:09.582766  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:09.839211  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:09.923266  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:09.924614  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:10.082886  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:10.339226  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:10.423173  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:10.423317  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:10.582649  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:10.839151  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:10.923910  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:10.924118  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:11.082202  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:11.338333  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:11.423320  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:11.423370  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:11.583043  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:11.995024  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:11.995285  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:11.995335  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:12.081866  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:12.339531  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:12.423376  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:12.423433  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:12.582736  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:12.838695  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:12.923724  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:12.923779  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:13.082652  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:13.339740  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:13.423794  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:13.424653  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:13.582899  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:13.838492  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:13.924029  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:13.924089  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:14.082827  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:14.339451  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:14.423644  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:14.423929  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:14.581747  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:14.839040  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:14.924332  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:14.925201  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:15.095944  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:15.337833  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:15.424792  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:15.424921  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:15.582082  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:15.837926  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:15.923918  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:15.924043  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:16.379027  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:16.379367  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:16.424592  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:16.424742  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:16.582280  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:16.838430  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:16.923307  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:16.923380  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:17.083161  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:17.339471  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:17.424187  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:17.424266  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:17.582469  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:17.839675  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:17.923374  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:17.923619  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:18.082918  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:18.339107  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:18.440622  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:18.440628  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:18.584000  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:18.838500  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:18.923259  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:18.923374  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:19.082769  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:19.338993  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:19.424168  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:19.424763  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:19.582297  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:19.838681  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:19.924366  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:19.924529  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:20.082001  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:20.338030  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:20.424644  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:20.424678  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:20.582445  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:20.839300  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:20.925100  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:20.925993  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:21.082435  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:21.340072  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:21.423974  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:21.424508  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 10:35:21.583746  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:21.840568  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:21.924344  117164 kapi.go:107] duration metric: took 53.004395635s to wait for kubernetes.io/minikube-addons=registry ...
	I0210 10:35:21.924600  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:22.082469  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:22.340157  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:22.425093  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:22.584045  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:22.838219  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:22.924132  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:23.082121  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:23.338434  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:23.422989  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:23.581619  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:23.839538  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:23.924273  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:24.082989  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:24.338811  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:24.423367  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:24.582161  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:24.839052  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:24.939144  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:25.123292  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:25.338143  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:25.423843  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:25.582748  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:25.839673  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:25.940005  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:26.082372  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:26.339635  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:26.429173  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:26.581930  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:26.839509  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:26.922785  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:27.082524  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:27.338977  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:27.423377  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:27.582057  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:27.838220  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:27.923917  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:28.081769  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:28.339674  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:28.440456  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:28.582230  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:28.841412  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:28.923135  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:29.082227  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:29.339169  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:29.424082  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:29.581556  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:29.838900  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:29.923475  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:30.082373  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:30.339327  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:30.423347  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:30.582190  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:30.838119  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:30.929192  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:31.083036  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:31.339175  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:31.424223  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:31.582708  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:31.838961  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:31.925074  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:32.081958  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:32.339080  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:32.423700  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:32.583038  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:32.838013  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:32.923721  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:33.085014  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:33.338542  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:33.438931  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:33.589113  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:33.838119  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:33.924037  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:34.082304  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:34.338323  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:34.438868  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:34.873881  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:34.881998  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:34.959497  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:35.081566  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:35.338978  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:35.424570  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:35.582381  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:35.838313  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:35.924367  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:36.082781  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:36.338868  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:36.425687  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:36.582664  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:36.839833  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:36.924706  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:37.085312  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:37.340057  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:37.424903  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:37.583268  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:37.838083  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:37.924939  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:38.081392  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:38.338428  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:38.423125  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:38.581935  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:38.839025  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:38.924070  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:39.082395  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:39.339067  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:39.444259  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:39.582375  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:39.838315  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:39.923227  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:40.081731  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:40.338599  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:40.423215  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:40.581513  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:40.851675  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:40.943572  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:41.081794  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:41.338877  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:41.423757  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:41.582646  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:41.841831  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:41.924155  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:42.082765  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:42.340022  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:42.440450  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:42.584097  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:42.839284  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:42.924807  117164 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 10:35:43.082513  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:43.338868  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:43.426731  117164 kapi.go:107] duration metric: took 1m14.506776654s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0210 10:35:43.588321  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:43.838505  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:44.085926  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:44.339885  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:44.582461  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:44.838808  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:45.082938  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:45.339164  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:45.582864  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:45.839026  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:46.082555  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:46.338734  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:46.583854  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:46.839086  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:47.081453  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:47.339659  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:47.583245  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 10:35:47.838090  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:48.084602  117164 kapi.go:107] duration metric: took 1m16.505667148s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0210 10:35:48.085945  117164 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-176336 cluster.
	I0210 10:35:48.087030  117164 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0210 10:35:48.087992  117164 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0210 10:35:48.338881  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:48.838933  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:49.338817  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:49.839012  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:50.338270  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:50.839016  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:51.339100  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:51.839753  117164 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 10:35:52.338778  117164 kapi.go:107] duration metric: took 1m22.003699732s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0210 10:35:52.340495  117164 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0210 10:35:52.341545  117164 addons.go:514] duration metric: took 1m31.525594505s for enable addons: enabled=[ingress-dns cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0210 10:35:52.341609  117164 start.go:246] waiting for cluster config update ...
	I0210 10:35:52.341634  117164 start.go:255] writing updated cluster config ...
	I0210 10:35:52.341958  117164 ssh_runner.go:195] Run: rm -f paused
	I0210 10:35:52.393441  117164 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 10:35:52.395023  117164 out.go:177] * Done! kubectl is now configured to use "addons-176336" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 10 10:38:56 addons-176336 crio[667]: time="2025-02-10 10:38:56.209850476Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183936209820335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da6ac2ed-7d34-465d-9ae5-9da923b39604 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 10:38:56 addons-176336 crio[667]: time="2025-02-10 10:38:56.210638207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8a07146-82f1-4257-973f-74d0d464488e name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 10:38:56 addons-176336 crio[667]: time="2025-02-10 10:38:56.210706109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8a07146-82f1-4257-973f-74d0d464488e name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 10:38:56 addons-176336 crio[667]: time="2025-02-10 10:38:56.211061647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52a8c2f753ff081df4814c0cdac41056f532f094b8daf5d25f65b46e53915391,PodSandboxId:d7ad2f942453b4d53228a330153e4d5ffc64b01cbbdad847de6993c7457e8253,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739183796401493411,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58e0a91e-67c4-4ecb-b465-8d8abb30521e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9d135cd2cd3e36afd65ef0503ed205eb4c7fab7d444a3f6a2bd9d2c47566a3,PodSandboxId:6ab63b6eb3e333937a7981cc7a9c35feb3ddd7914747ffe84218c4e33611e409,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739183756270567205,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e0f6684-3b28-497f-b99e-d8ce49ab2130,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63df5b4f8e1d0028b2e0c72fe1a5e4f090d9d66531ee0335994d09bd5176d2d,PodSandboxId:3a3c9a28f0c8522947907b25204af22a40337b18911140ea801516121b29d841,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739183741997458095,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-8h9kx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0114fd7-74ae-4bef-895a-13dcf0a3cc25,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:50e4352e597d1865bb776c9418d9f28b01f07094e390f9f0a597aeee6255f302,PodSandboxId:5e7b5c1adbfcd9a0fc3d064c24d6c2c3f0445c02b7c9a473428ba4e26c5a8ff8,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1739183728661789781,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2cfdd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5383f51f-d8d0-4681-961a-6f8f903ee393,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16713f6f6904f4a64f5a7a6fe6fab53dd11e4d8cd2c1e4de96d2cb4ee876c495,PodSandboxId:0e173b18a40c30181d87fb69008b1994ffdcf0fd5db7bd88dafbd43746dd4ee1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739183728305946122,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lqthz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea86543a-49a0-4039-a787-00c0b2c5064d,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82f9fd725b70c11500bfcb221aee1941ab4827db152fd722948fccb33b444c,PodSandboxId:4972dfe079086a5a6f70d113b727427f7dd1525594c1b789e5b8f8db33bbe85d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739183679497268955,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-dh7cx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72c0172-3942-4bd6-917c-f3b7e3fd7607,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b2fb9a4e51676810a997907e0e59812d0417c2270faa4b95aaadb703165672,PodSandboxId:8728578287e1ffe2558e2e1a262b0b478602b4bd1024dd828f3bca1ef860290d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739183676613697350,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc9d9636-0d0b-4f4c-815f-0c27eb802a57,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89af92114d5787de109044edb4467f2cb0f760283fa70e5be18231f605239983,PodSandboxId:34dcd002f7fe97f478a3367d72e3438063a77e0126d9c03031a63fb1170c23a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739183667487725176,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc9d0b36-b428-4c7e-b022-c4db04559fae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a002e79df38ecadd0017a267050049e8ca2995bd5efabff63b35ef6311d8b2,PodSandboxId:545f703a536419fe3943bf5edc70358b9b4e01dc42f06878a61599d7480e28d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739183666017358318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cjf5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f9868e3-56a5-44c2-8114-959f0fc9e24f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a0734289f10df2e83852054155b17f519275f69009355e9e2a5
2a04d07a379,PodSandboxId:0429495111c8e977796da8809c74aab3de45b8aadafd8e74bdde1414d9279a90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739183663139471995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gt77j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c90dafe-4fae-4761-b3d0-99cc5a66a0c3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:363d7872b90aa24c658b18d6deb8f08946dfec19aa2b2b7c0184a3f7240fbb77,PodSandboxId:416fcadf
741d0ac70a73ad49d421a488f48e43d33de4cf818e56d50278965bbe,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739183650658374137,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-176336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de62839201adadd1fa3b5bbed8f9b81,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752f276c4ec307e4d54d8637eb15addb017467aace1b37a0e21b4126c17e4b4a,PodSandboxId:27962e04df33f8e74eb413312deef79f3dfbd7d42614d2a2bd0de64
d07cf19dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739183650621301679,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-176336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b9ce3ef3a7488c17deb45773a45af9,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd3ae854d6c175eae404b039e505249034390880d50d3d5f9aefa579373b057,PodSandboxId:8ecad3e63714ef944dd5258ac692366bd9534cce681bf3b349b3d1512d7c742a,Metadat
a:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739183650584577967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-176336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6abbb095762c1f74d0a65d58ff8d9b60,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b4c88f533dd651b0efdbe4e4490d2c720fab084d142719531bfd95dd2e1bdf,PodSandboxId:ed6aee168a58a36fcdf73aa9a15461faa0318ea252ad29e5b0d0aa8be081a8d
c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739183650535310364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-176336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992088461af9c30230a13ab52cb32e6e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8a07146-82f1-4257-973f-74d0d464488e name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 10:38:56 addons-176336 crio[667]: time="2025-02-10 10:38:56.252077541Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff5f8f0a-e6e6-430b-88b0-9ea82e11e2ca name=/runtime.v1.RuntimeService/Version
	Feb 10 10:38:56 addons-176336 crio[667]: time="2025-02-10 10:38:56.252203140Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff5f8f0a-e6e6-430b-88b0-9ea82e11e2ca name=/runtime.v1.RuntimeService/Version
	Feb 10 10:38:56 addons-176336 crio[667]: time="2025-02-10 10:38:56.253309477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e66f181-4bbc-4438-b50f-e560d7f563cc name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 10:38:56 addons-176336 crio[667]: time="2025-02-10 10:38:56.254422940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183936254398297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e66f181-4bbc-4438-b50f-e560d7f563cc name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 10:38:56 addons-176336 crio[667]: time="2025-02-10 10:38:56.254958296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e008a3f3-3fb7-4ec2-a5e3-98aea8abab03 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 10:38:56 addons-176336 crio[667]: time="2025-02-10 10:38:56.255017974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e008a3f3-3fb7-4ec2-a5e3-98aea8abab03 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 10:38:56 addons-176336 crio[667]: time="2025-02-10 10:38:56.255422295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52a8c2f753ff081df4814c0cdac41056f532f094b8daf5d25f65b46e53915391,PodSandboxId:d7ad2f942453b4d53228a330153e4d5ffc64b01cbbdad847de6993c7457e8253,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739183796401493411,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58e0a91e-67c4-4ecb-b465-8d8abb30521e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f9d135cd2cd3e36afd65ef0503ed205eb4c7fab7d444a3f6a2bd9d2c47566a3,PodSandboxId:6ab63b6eb3e333937a7981cc7a9c35feb3ddd7914747ffe84218c4e33611e409,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739183756270567205,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e0f6684-3b28-497f-b99e-d8ce49ab2130,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63df5b4f8e1d0028b2e0c72fe1a5e4f090d9d66531ee0335994d09bd5176d2d,PodSandboxId:3a3c9a28f0c8522947907b25204af22a40337b18911140ea801516121b29d841,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739183741997458095,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-8h9kx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0114fd7-74ae-4bef-895a-13dcf0a3cc25,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:50e4352e597d1865bb776c9418d9f28b01f07094e390f9f0a597aeee6255f302,PodSandboxId:5e7b5c1adbfcd9a0fc3d064c24d6c2c3f0445c02b7c9a473428ba4e26c5a8ff8,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1739183728661789781,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2cfdd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5383f51f-d8d0-4681-961a-6f8f903ee393,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16713f6f6904f4a64f5a7a6fe6fab53dd11e4d8cd2c1e4de96d2cb4ee876c495,PodSandboxId:0e173b18a40c30181d87fb69008b1994ffdcf0fd5db7bd88dafbd43746dd4ee1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739183728305946122,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lqthz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea86543a-49a0-4039-a787-00c0b2c5064d,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c82f9fd725b70c11500bfcb221aee1941ab4827db152fd722948fccb33b444c,PodSandboxId:4972dfe079086a5a6f70d113b727427f7dd1525594c1b789e5b8f8db33bbe85d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739183679497268955,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-dh7cx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72c0172-3942-4bd6-917c-f3b7e3fd7607,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b2fb9a4e51676810a997907e0e59812d0417c2270faa4b95aaadb703165672,PodSandboxId:8728578287e1ffe2558e2e1a262b0b478602b4bd1024dd828f3bca1ef860290d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739183676613697350,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc9d9636-0d0b-4f4c-815f-0c27eb802a57,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89af92114d5787de109044edb4467f2cb0f760283fa70e5be18231f605239983,PodSandboxId:34dcd002f7fe97f478a3367d72e3438063a77e0126d9c03031a63fb1170c23a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739183667487725176,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc9d0b36-b428-4c7e-b022-c4db04559fae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a002e79df38ecadd0017a267050049e8ca2995bd5efabff63b35ef6311d8b2,PodSandboxId:545f703a536419fe3943bf5edc70358b9b4e01dc42f06878a61599d7480e28d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739183666017358318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cjf5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f9868e3-56a5-44c2-8114-959f0fc9e24f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a0734289f10df2e83852054155b17f519275f69009355e9e2a5
2a04d07a379,PodSandboxId:0429495111c8e977796da8809c74aab3de45b8aadafd8e74bdde1414d9279a90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739183663139471995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gt77j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c90dafe-4fae-4761-b3d0-99cc5a66a0c3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:363d7872b90aa24c658b18d6deb8f08946dfec19aa2b2b7c0184a3f7240fbb77,PodSandboxId:416fcadf
741d0ac70a73ad49d421a488f48e43d33de4cf818e56d50278965bbe,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739183650658374137,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-176336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de62839201adadd1fa3b5bbed8f9b81,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752f276c4ec307e4d54d8637eb15addb017467aace1b37a0e21b4126c17e4b4a,PodSandboxId:27962e04df33f8e74eb413312deef79f3dfbd7d42614d2a2bd0de64
d07cf19dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739183650621301679,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-176336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b9ce3ef3a7488c17deb45773a45af9,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd3ae854d6c175eae404b039e505249034390880d50d3d5f9aefa579373b057,PodSandboxId:8ecad3e63714ef944dd5258ac692366bd9534cce681bf3b349b3d1512d7c742a,Metadat
a:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739183650584577967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-176336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6abbb095762c1f74d0a65d58ff8d9b60,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b4c88f533dd651b0efdbe4e4490d2c720fab084d142719531bfd95dd2e1bdf,PodSandboxId:ed6aee168a58a36fcdf73aa9a15461faa0318ea252ad29e5b0d0aa8be081a8d
c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739183650535310364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-176336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 992088461af9c30230a13ab52cb32e6e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e008a3f3-3fb7-4ec2-a5e3-98aea8abab03 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52a8c2f753ff0       docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da                              2 minutes ago       Running             nginx                     0                   d7ad2f942453b       nginx
	5f9d135cd2cd3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   6ab63b6eb3e33       busybox
	e63df5b4f8e1d       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   3a3c9a28f0c85       ingress-nginx-controller-56d7c84fd4-8h9kx
	50e4352e597d1       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   5e7b5c1adbfcd       ingress-nginx-admission-patch-2cfdd
	16713f6f6904f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   0e173b18a40c3       ingress-nginx-admission-create-lqthz
	6c82f9fd725b7       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   4972dfe079086       amd-gpu-device-plugin-dh7cx
	87b2fb9a4e516       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   8728578287e1f       kube-ingress-dns-minikube
	89af92114d578       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   34dcd002f7fe9       storage-provisioner
	e0a002e79df38       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   545f703a53641       coredns-668d6bf9bc-cjf5q
	18a0734289f10       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago       Running             kube-proxy                0                   0429495111c8e       kube-proxy-gt77j
	363d7872b90aa       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   416fcadf741d0       etcd-addons-176336
	752f276c4ec30       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago       Running             kube-scheduler            0                   27962e04df33f       kube-scheduler-addons-176336
	bdd3ae854d6c1       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago       Running             kube-controller-manager   0                   8ecad3e63714e       kube-controller-manager-addons-176336
	09b4c88f533dd       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago       Running             kube-apiserver            0                   ed6aee168a58a       kube-apiserver-addons-176336
	
	
	==> coredns [e0a002e79df38ecadd0017a267050049e8ca2995bd5efabff63b35ef6311d8b2] <==
	[INFO] 10.244.0.8:48904 - 21841 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000810127s
	[INFO] 10.244.0.8:48904 - 54025 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00011432s
	[INFO] 10.244.0.8:48904 - 31345 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000557265s
	[INFO] 10.244.0.8:48904 - 18822 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000104162s
	[INFO] 10.244.0.8:48904 - 44479 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000231643s
	[INFO] 10.244.0.8:48904 - 14558 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000122724s
	[INFO] 10.244.0.8:48904 - 25476 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000244837s
	[INFO] 10.244.0.8:36072 - 44649 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000139533s
	[INFO] 10.244.0.8:36072 - 44373 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000138095s
	[INFO] 10.244.0.8:47537 - 53506 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000103422s
	[INFO] 10.244.0.8:47537 - 53309 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142916s
	[INFO] 10.244.0.8:47577 - 22646 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086344s
	[INFO] 10.244.0.8:47577 - 22387 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000246249s
	[INFO] 10.244.0.8:35586 - 51627 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000159807s
	[INFO] 10.244.0.8:35586 - 51219 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000323922s
	[INFO] 10.244.0.23:43715 - 17874 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000268178s
	[INFO] 10.244.0.23:33242 - 6555 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000139772s
	[INFO] 10.244.0.23:54958 - 49265 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114136s
	[INFO] 10.244.0.23:44346 - 27586 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000099874s
	[INFO] 10.244.0.23:47443 - 18559 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000094727s
	[INFO] 10.244.0.23:50062 - 4898 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0000492s
	[INFO] 10.244.0.23:51750 - 43305 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003656817s
	[INFO] 10.244.0.23:42251 - 65495 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004051534s
	[INFO] 10.244.0.27:55375 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000462964s
	[INFO] 10.244.0.27:45068 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141862s
	
	
	==> describe nodes <==
	Name:               addons-176336
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-176336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=addons-176336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T10_34_16_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-176336
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 10:34:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-176336
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 10:38:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 10:36:50 +0000   Mon, 10 Feb 2025 10:34:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 10:36:50 +0000   Mon, 10 Feb 2025 10:34:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 10:36:50 +0000   Mon, 10 Feb 2025 10:34:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 10:36:50 +0000   Mon, 10 Feb 2025 10:34:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    addons-176336
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b9c820a27fa45e6a56563d89afe30b4
	  System UUID:                9b9c820a-27fa-45e6-a565-63d89afe30b4
	  Boot ID:                    4c7bf3af-25b5-445d-afb4-e04ad56842aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  default                     hello-world-app-7d9564db4-6xxfm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-8h9kx    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m28s
	  kube-system                 amd-gpu-device-plugin-dh7cx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 coredns-668d6bf9bc-cjf5q                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m36s
	  kube-system                 etcd-addons-176336                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m40s
	  kube-system                 kube-apiserver-addons-176336                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-controller-manager-addons-176336        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-gt77j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-addons-176336                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m32s                  kube-proxy       
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s (x7 over 4m46s)  kubelet          Node addons-176336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s (x7 over 4m46s)  kubelet          Node addons-176336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s (x6 over 4m46s)  kubelet          Node addons-176336 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m40s                  kubelet          Node addons-176336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s                  kubelet          Node addons-176336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s                  kubelet          Node addons-176336 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m40s                  kubelet          Node addons-176336 status is now: NodeReady
	  Normal  RegisteredNode           4m37s                  node-controller  Node addons-176336 event: Registered Node addons-176336 in Controller
	
	
	==> dmesg <==
	[  +6.232309] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[  +0.092302] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.045748] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[  +0.120498] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.017894] kauditd_printk_skb: 120 callbacks suppressed
	[  +5.072084] kauditd_printk_skb: 147 callbacks suppressed
	[  +5.453260] kauditd_printk_skb: 62 callbacks suppressed
	[Feb10 10:35] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.874652] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.613527] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.769015] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.046560] kauditd_printk_skb: 45 callbacks suppressed
	[  +7.306689] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.925288] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.328438] kauditd_printk_skb: 7 callbacks suppressed
	[Feb10 10:36] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.361496] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.018478] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.642591] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.163044] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.095551] kauditd_printk_skb: 77 callbacks suppressed
	[  +8.280229] kauditd_printk_skb: 18 callbacks suppressed
	[Feb10 10:37] kauditd_printk_skb: 10 callbacks suppressed
	[ +13.322788] kauditd_printk_skb: 7 callbacks suppressed
	[Feb10 10:38] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [363d7872b90aa24c658b18d6deb8f08946dfec19aa2b2b7c0184a3f7240fbb77] <==
	{"level":"warn","ts":"2025-02-10T10:35:34.777278Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.637212ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T10:35:34.777311Z","caller":"traceutil/trace.go:171","msg":"trace[570657917] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1067; }","duration":"316.678908ms","start":"2025-02-10T10:35:34.460626Z","end":"2025-02-10T10:35:34.777304Z","steps":["trace[570657917] 'agreement among raft nodes before linearized reading'  (duration: 316.591755ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T10:35:34.777279Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T10:35:34.457509Z","time spent":"319.718339ms","remote":"127.0.0.1:56828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1050 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-02-10T10:35:34.777189Z","caller":"traceutil/trace.go:171","msg":"trace[523323323] linearizableReadLoop","detail":"{readStateIndex:1098; appliedIndex:1097; }","duration":"316.536271ms","start":"2025-02-10T10:35:34.460631Z","end":"2025-02-10T10:35:34.777167Z","steps":["trace[523323323] 'read index received'  (duration: 316.474098ms)","trace[523323323] 'applied index is now lower than readState.Index'  (duration: 60.934µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-10T10:35:34.857348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.503158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-02-10T10:35:34.857387Z","caller":"traceutil/trace.go:171","msg":"trace[1624104970] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1068; }","duration":"314.574358ms","start":"2025-02-10T10:35:34.542804Z","end":"2025-02-10T10:35:34.857378Z","steps":["trace[1624104970] 'agreement among raft nodes before linearized reading'  (duration: 314.395653ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T10:35:34.857407Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T10:35:34.542790Z","time spent":"314.612277ms","remote":"127.0.0.1:56896","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":521,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 "}
	{"level":"info","ts":"2025-02-10T10:35:34.857505Z","caller":"traceutil/trace.go:171","msg":"trace[7708873] transaction","detail":"{read_only:false; response_revision:1068; number_of_response:1; }","duration":"107.701217ms","start":"2025-02-10T10:35:34.749796Z","end":"2025-02-10T10:35:34.857497Z","steps":["trace[7708873] 'process raft request'  (duration: 101.484411ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T10:35:34.857591Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.409994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T10:35:34.857605Z","caller":"traceutil/trace.go:171","msg":"trace[988812503] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1068; }","duration":"286.443601ms","start":"2025-02-10T10:35:34.571157Z","end":"2025-02-10T10:35:34.857601Z","steps":["trace[988812503] 'agreement among raft nodes before linearized reading'  (duration: 286.41727ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T10:35:41.815023Z","caller":"traceutil/trace.go:171","msg":"trace[1653725756] transaction","detail":"{read_only:false; response_revision:1091; number_of_response:1; }","duration":"145.137907ms","start":"2025-02-10T10:35:41.669869Z","end":"2025-02-10T10:35:41.815007Z","steps":["trace[1653725756] 'process raft request'  (duration: 145.012117ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T10:36:12.681816Z","caller":"traceutil/trace.go:171","msg":"trace[1585574848] linearizableReadLoop","detail":"{readStateIndex:1283; appliedIndex:1282; }","duration":"221.304344ms","start":"2025-02-10T10:36:12.460490Z","end":"2025-02-10T10:36:12.681794Z","steps":["trace[1585574848] 'read index received'  (duration: 221.182296ms)","trace[1585574848] 'applied index is now lower than readState.Index'  (duration: 121.627µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-10T10:36:12.682011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.497196ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T10:36:12.682105Z","caller":"traceutil/trace.go:171","msg":"trace[77476281] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1242; }","duration":"221.613597ms","start":"2025-02-10T10:36:12.460485Z","end":"2025-02-10T10:36:12.682098Z","steps":["trace[77476281] 'agreement among raft nodes before linearized reading'  (duration: 221.453364ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T10:36:12.682481Z","caller":"traceutil/trace.go:171","msg":"trace[890990080] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1242; }","duration":"328.408384ms","start":"2025-02-10T10:36:12.353835Z","end":"2025-02-10T10:36:12.682243Z","steps":["trace[890990080] 'process raft request'  (duration: 327.871962ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T10:36:12.683311Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T10:36:12.353824Z","time spent":"328.749476ms","remote":"127.0.0.1:56756","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":50,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/configmaps/gcp-auth/kube-root-ca.crt\" mod_revision:814 > success:<request_delete_range:<key:\"/registry/configmaps/gcp-auth/kube-root-ca.crt\" > > failure:<request_range:<key:\"/registry/configmaps/gcp-auth/kube-root-ca.crt\" > >"}
	{"level":"info","ts":"2025-02-10T10:36:41.417674Z","caller":"traceutil/trace.go:171","msg":"trace[695770639] transaction","detail":"{read_only:false; response_revision:1564; number_of_response:1; }","duration":"355.982775ms","start":"2025-02-10T10:36:41.061672Z","end":"2025-02-10T10:36:41.417655Z","steps":["trace[695770639] 'process raft request'  (duration: 355.843901ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T10:36:41.419778Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T10:36:41.061658Z","time spent":"358.00923ms","remote":"127.0.0.1:56896","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1518 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2025-02-10T10:36:41.418867Z","caller":"traceutil/trace.go:171","msg":"trace[164330331] linearizableReadLoop","detail":"{readStateIndex:1619; appliedIndex:1618; }","duration":"247.451328ms","start":"2025-02-10T10:36:41.171400Z","end":"2025-02-10T10:36:41.418852Z","steps":["trace[164330331] 'read index received'  (duration: 246.367057ms)","trace[164330331] 'applied index is now lower than readState.Index'  (duration: 1.083648ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-10T10:36:41.419077Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.623022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-02-10T10:36:41.420060Z","caller":"traceutil/trace.go:171","msg":"trace[1233548511] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1565; }","duration":"248.665406ms","start":"2025-02-10T10:36:41.171367Z","end":"2025-02-10T10:36:41.420032Z","steps":["trace[1233548511] 'agreement among raft nodes before linearized reading'  (duration: 247.527001ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T10:36:41.420521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.67999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" limit:1 ","response":"range_response_count:1 size:588"}
	{"level":"info","ts":"2025-02-10T10:36:41.420586Z","caller":"traceutil/trace.go:171","msg":"trace[954031731] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:1565; }","duration":"194.750232ms","start":"2025-02-10T10:36:41.225827Z","end":"2025-02-10T10:36:41.420577Z","steps":["trace[954031731] 'agreement among raft nodes before linearized reading'  (duration: 194.620222ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T10:36:41.419248Z","caller":"traceutil/trace.go:171","msg":"trace[330317349] transaction","detail":"{read_only:false; response_revision:1565; number_of_response:1; }","duration":"274.427449ms","start":"2025-02-10T10:36:41.144809Z","end":"2025-02-10T10:36:41.419236Z","steps":["trace[330317349] 'process raft request'  (duration: 273.962288ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T10:37:17.276525Z","caller":"traceutil/trace.go:171","msg":"trace[1745221128] transaction","detail":"{read_only:false; response_revision:1710; number_of_response:1; }","duration":"152.492349ms","start":"2025-02-10T10:37:17.123996Z","end":"2025-02-10T10:37:17.276489Z","steps":["trace[1745221128] 'process raft request'  (duration: 152.376813ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:38:56 up 5 min,  0 users,  load average: 0.64, 1.28, 0.67
	Linux addons-176336 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [09b4c88f533dd651b0efdbe4e4490d2c720fab084d142719531bfd95dd2e1bdf] <==
	E0210 10:35:06.521869       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0210 10:36:04.125047       1 conn.go:339] Error on socket receive: read tcp 192.168.39.19:8443->192.168.39.1:43782: use of closed network connection
	E0210 10:36:04.321050       1 conn.go:339] Error on socket receive: read tcp 192.168.39.19:8443->192.168.39.1:43816: use of closed network connection
	I0210 10:36:13.699916       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.173.78"}
	I0210 10:36:31.763452       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0210 10:36:31.949003       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.91.58"}
	I0210 10:36:36.206889       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0210 10:36:37.255260       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0210 10:36:48.405064       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0210 10:36:49.837709       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0210 10:37:07.481968       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0210 10:37:18.093632       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 10:37:18.093687       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 10:37:18.120873       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 10:37:18.120983       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 10:37:18.152832       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 10:37:18.152877       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 10:37:18.166641       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 10:37:18.167078       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 10:37:18.228001       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 10:37:18.228058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0210 10:37:19.154225       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0210 10:37:19.228067       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0210 10:37:19.294027       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0210 10:38:55.191798       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.64.85"}
	
	
	==> kube-controller-manager [bdd3ae854d6c175eae404b039e505249034390880d50d3d5f9aefa579373b057] <==
	E0210 10:37:53.047867       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 10:37:56.229931       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 10:37:56.231028       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0210 10:37:56.231870       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 10:37:56.231931       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 10:38:24.572697       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 10:38:24.573630       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0210 10:38:24.574492       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 10:38:24.574545       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 10:38:30.021989       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 10:38:30.023034       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0210 10:38:30.024010       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 10:38:30.024092       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 10:38:31.242755       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 10:38:31.243706       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0210 10:38:31.244522       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 10:38:31.244580       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 10:38:38.349793       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 10:38:38.350900       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0210 10:38:38.351717       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 10:38:38.351811       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0210 10:38:55.015662       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="35.858257ms"
	I0210 10:38:55.039380       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="23.298678ms"
	I0210 10:38:55.052353       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="12.926782ms"
	I0210 10:38:55.052518       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="61.071µs"
	
	
	==> kube-proxy [18a0734289f10df2e83852054155b17f519275f69009355e9e2a52a04d07a379] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0210 10:34:24.000988       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0210 10:34:24.013664       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.19"]
	E0210 10:34:24.013741       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 10:34:24.095452       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 10:34:24.095497       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 10:34:24.095520       1 server_linux.go:170] "Using iptables Proxier"
	I0210 10:34:24.100088       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 10:34:24.100851       1 server.go:497] "Version info" version="v1.32.1"
	I0210 10:34:24.100864       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 10:34:24.103826       1 config.go:199] "Starting service config controller"
	I0210 10:34:24.103897       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 10:34:24.103924       1 config.go:105] "Starting endpoint slice config controller"
	I0210 10:34:24.103942       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 10:34:24.114991       1 config.go:329] "Starting node config controller"
	I0210 10:34:24.115018       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 10:34:24.208413       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 10:34:24.208444       1 shared_informer.go:320] Caches are synced for service config
	I0210 10:34:24.216178       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [752f276c4ec307e4d54d8637eb15addb017467aace1b37a0e21b4126c17e4b4a] <==
	W0210 10:34:13.283573       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0210 10:34:13.283683       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:34:13.283861       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 10:34:13.284689       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:34:13.284299       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 10:34:13.284797       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:34:13.284421       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0210 10:34:13.284851       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 10:34:13.284523       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0210 10:34:13.284890       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 10:34:14.118683       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 10:34:14.118801       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:34:14.208332       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0210 10:34:14.208512       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:34:14.208722       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 10:34:14.208882       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:34:14.209104       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0210 10:34:14.210027       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:34:14.272912       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0210 10:34:14.273506       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:34:14.278700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0210 10:34:14.278766       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0210 10:34:14.354043       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0210 10:34:14.354092       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 10:34:14.867041       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 10 10:38:16 addons-176336 kubelet[1229]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 10:38:16 addons-176336 kubelet[1229]: E0210 10:38:16.638586    1229 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183896638213461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 10:38:16 addons-176336 kubelet[1229]: E0210 10:38:16.638667    1229 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183896638213461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 10:38:23 addons-176336 kubelet[1229]: I0210 10:38:23.134925    1229 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Feb 10 10:38:26 addons-176336 kubelet[1229]: E0210 10:38:26.641622    1229 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183906641195628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 10:38:26 addons-176336 kubelet[1229]: E0210 10:38:26.641690    1229 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183906641195628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 10:38:33 addons-176336 kubelet[1229]: I0210 10:38:33.134596    1229 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-dh7cx" secret="" err="secret \"gcp-auth\" not found"
	Feb 10 10:38:36 addons-176336 kubelet[1229]: E0210 10:38:36.644329    1229 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183916643955267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 10:38:36 addons-176336 kubelet[1229]: E0210 10:38:36.644367    1229 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183916643955267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 10:38:46 addons-176336 kubelet[1229]: E0210 10:38:46.647801    1229 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183926647331769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 10:38:46 addons-176336 kubelet[1229]: E0210 10:38:46.648079    1229 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183926647331769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.006308    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="853c4728-1092-4951-805d-db078866aa70" containerName="task-pv-container"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.006352    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="84664bc1-fc4d-4ea8-a71a-5933b7e45ceb" containerName="csi-attacher"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.006361    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="838867ed-d927-4b16-bc48-a753cebc7ce1" containerName="csi-provisioner"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.006366    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="bef3f7cb-ce84-4a94-ac84-29587b986a05" containerName="volume-snapshot-controller"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.006371    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="838867ed-d927-4b16-bc48-a753cebc7ce1" containerName="node-driver-registrar"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.006376    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="838867ed-d927-4b16-bc48-a753cebc7ce1" containerName="csi-external-health-monitor-controller"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.006381    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="838867ed-d927-4b16-bc48-a753cebc7ce1" containerName="liveness-probe"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.006385    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="838867ed-d927-4b16-bc48-a753cebc7ce1" containerName="csi-snapshotter"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.006390    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="838867ed-d927-4b16-bc48-a753cebc7ce1" containerName="hostpath"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.006396    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="936bee80-ad8e-4e5c-b8c0-293f2fea5d8a" containerName="csi-resizer"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.006401    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="7a51c1e1-4c03-4bc1-abc3-8806f8820eeb" containerName="volume-snapshot-controller"
	Feb 10 10:38:55 addons-176336 kubelet[1229]: I0210 10:38:55.106764    1229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s5ft\" (UniqueName: \"kubernetes.io/projected/ea81429c-cd50-4ef1-8c73-0ef7783a9826-kube-api-access-2s5ft\") pod \"hello-world-app-7d9564db4-6xxfm\" (UID: \"ea81429c-cd50-4ef1-8c73-0ef7783a9826\") " pod="default/hello-world-app-7d9564db4-6xxfm"
	Feb 10 10:38:56 addons-176336 kubelet[1229]: E0210 10:38:56.653330    1229 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183936650897795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 10:38:56 addons-176336 kubelet[1229]: E0210 10:38:56.653373    1229 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739183936650897795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [89af92114d5787de109044edb4467f2cb0f760283fa70e5be18231f605239983] <==
	I0210 10:34:28.579506       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0210 10:34:28.626247       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0210 10:34:28.626318       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0210 10:34:28.802254       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0210 10:34:28.806996       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed73bd14-662c-4b22-95c4-75a9362d4297", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-176336_5bbbce70-4fea-46c9-8a1f-7340e5a6db61 became leader
	I0210 10:34:28.807186       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-176336_5bbbce70-4fea-46c9-8a1f-7340e5a6db61!
	I0210 10:34:29.007392       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-176336_5bbbce70-4fea-46c9-8a1f-7340e5a6db61!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-176336 -n addons-176336
helpers_test.go:261: (dbg) Run:  kubectl --context addons-176336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-6xxfm ingress-nginx-admission-create-lqthz ingress-nginx-admission-patch-2cfdd
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-176336 describe pod hello-world-app-7d9564db4-6xxfm ingress-nginx-admission-create-lqthz ingress-nginx-admission-patch-2cfdd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-176336 describe pod hello-world-app-7d9564db4-6xxfm ingress-nginx-admission-create-lqthz ingress-nginx-admission-patch-2cfdd: exit status 1 (65.664571ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-6xxfm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-176336/192.168.39.19
	Start Time:       Mon, 10 Feb 2025 10:38:55 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2s5ft (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2s5ft:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-6xxfm to addons-176336
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lqthz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2cfdd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-176336 describe pod hello-world-app-7d9564db4-6xxfm ingress-nginx-admission-create-lqthz ingress-nginx-admission-patch-2cfdd: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-176336 addons disable ingress-dns --alsologtostderr -v=1: (1.280620788s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-176336 addons disable ingress --alsologtostderr -v=1: (7.659563305s)
--- FAIL: TestAddons/parallel/Ingress (154.92s)
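
For manual triage after an ingress failure like the one above, the same profile can be inspected directly. The commands below are a hypothetical sketch, not part of the CI transcript; they assume the addons-176336 profile is still running, that the ingress addons have not yet been disabled (the transcript disables them at the end of the test), and that the controller Deployment is named ingress-nginx-controller, as the pod name in the node description suggests.

	# list the Ingress objects the test created and the controller's pods/services
	kubectl --context addons-176336 get ingress -A
	kubectl --context addons-176336 -n ingress-nginx get pods,svc
	# tail the controller logs for admission or routing errors
	kubectl --context addons-176336 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50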

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567541 ssh pgrep buildkitd: exit status 1 (210.284619ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image build -t localhost/my-image:functional-567541 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-567541 image build -t localhost/my-image:functional-567541 testdata/build --alsologtostderr: (3.867418846s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567541 image build -t localhost/my-image:functional-567541 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7a16eb6a92b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-567541
--> 959e671938a
Successfully tagged localhost/my-image:functional-567541
959e671938ada138f2808a996b463e89ed6eb08efda045d35b1165f303d57e61
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567541 image build -t localhost/my-image:functional-567541 testdata/build --alsologtostderr:
I0210 10:44:33.910509  125375 out.go:345] Setting OutFile to fd 1 ...
I0210 10:44:33.910688  125375 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:44:33.910699  125375 out.go:358] Setting ErrFile to fd 2...
I0210 10:44:33.910704  125375 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:44:33.910880  125375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
I0210 10:44:33.911511  125375 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 10:44:33.912232  125375 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 10:44:33.912602  125375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 10:44:33.912644  125375 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:44:33.928275  125375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
I0210 10:44:33.928711  125375 main.go:141] libmachine: () Calling .GetVersion
I0210 10:44:33.929244  125375 main.go:141] libmachine: Using API Version  1
I0210 10:44:33.929265  125375 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:44:33.929659  125375 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:44:33.929883  125375 main.go:141] libmachine: (functional-567541) Calling .GetState
I0210 10:44:33.931858  125375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 10:44:33.931914  125375 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:44:33.946711  125375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38859
I0210 10:44:33.947148  125375 main.go:141] libmachine: () Calling .GetVersion
I0210 10:44:33.947660  125375 main.go:141] libmachine: Using API Version  1
I0210 10:44:33.947682  125375 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:44:33.948017  125375 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:44:33.948223  125375 main.go:141] libmachine: (functional-567541) Calling .DriverName
I0210 10:44:33.948419  125375 ssh_runner.go:195] Run: systemctl --version
I0210 10:44:33.948446  125375 main.go:141] libmachine: (functional-567541) Calling .GetSSHHostname
I0210 10:44:33.951287  125375 main.go:141] libmachine: (functional-567541) DBG | domain functional-567541 has defined MAC address 52:54:00:fe:3f:3d in network mk-functional-567541
I0210 10:44:33.951691  125375 main.go:141] libmachine: (functional-567541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3f:3d", ip: ""} in network mk-functional-567541: {Iface:virbr1 ExpiryTime:2025-02-10 11:41:42 +0000 UTC Type:0 Mac:52:54:00:fe:3f:3d Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-567541 Clientid:01:52:54:00:fe:3f:3d}
I0210 10:44:33.951716  125375 main.go:141] libmachine: (functional-567541) DBG | domain functional-567541 has defined IP address 192.168.39.8 and MAC address 52:54:00:fe:3f:3d in network mk-functional-567541
I0210 10:44:33.951851  125375 main.go:141] libmachine: (functional-567541) Calling .GetSSHPort
I0210 10:44:33.952022  125375 main.go:141] libmachine: (functional-567541) Calling .GetSSHKeyPath
I0210 10:44:33.952188  125375 main.go:141] libmachine: (functional-567541) Calling .GetSSHUsername
I0210 10:44:33.952340  125375 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/functional-567541/id_rsa Username:docker}
I0210 10:44:34.041786  125375 build_images.go:161] Building image from path: /tmp/build.3745782601.tar
I0210 10:44:34.041839  125375 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0210 10:44:34.052458  125375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3745782601.tar
I0210 10:44:34.057476  125375 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3745782601.tar: stat -c "%s %y" /var/lib/minikube/build/build.3745782601.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3745782601.tar': No such file or directory
I0210 10:44:34.057505  125375 ssh_runner.go:362] scp /tmp/build.3745782601.tar --> /var/lib/minikube/build/build.3745782601.tar (3072 bytes)
I0210 10:44:34.083725  125375 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3745782601
I0210 10:44:34.094004  125375 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3745782601 -xf /var/lib/minikube/build/build.3745782601.tar
I0210 10:44:34.104110  125375 crio.go:315] Building image: /var/lib/minikube/build/build.3745782601
I0210 10:44:34.104161  125375 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-567541 /var/lib/minikube/build/build.3745782601 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0210 10:44:37.680620  125375 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-567541 /var/lib/minikube/build/build.3745782601 --cgroup-manager=cgroupfs: (3.576416998s)
I0210 10:44:37.680720  125375 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3745782601
I0210 10:44:37.703583  125375 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3745782601.tar
I0210 10:44:37.726619  125375 build_images.go:217] Built localhost/my-image:functional-567541 from /tmp/build.3745782601.tar
I0210 10:44:37.726655  125375 build_images.go:133] succeeded building to: functional-567541
I0210 10:44:37.726662  125375 build_images.go:134] failed building to: 
I0210 10:44:37.726720  125375 main.go:141] libmachine: Making call to close driver server
I0210 10:44:37.726741  125375 main.go:141] libmachine: (functional-567541) Calling .Close
I0210 10:44:37.727016  125375 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:44:37.727057  125375 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:44:37.727061  125375 main.go:141] libmachine: (functional-567541) DBG | Closing plugin on server side
I0210 10:44:37.727075  125375 main.go:141] libmachine: Making call to close driver server
I0210 10:44:37.727090  125375 main.go:141] libmachine: (functional-567541) Calling .Close
I0210 10:44:37.727347  125375 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:44:37.727372  125375 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image ls
functional_test.go:468: (dbg) Done: out/minikube-linux-amd64 -p functional-567541 image ls: (2.658450332s)
functional_test.go:463: expected "localhost/my-image:functional-567541" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (6.74s)
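Note: the STEP lines in the stdout above fully determine the build context the test passes as testdata/build. A minimal sketch of replaying the same build and the follow-up listing by hand, run from the same checkout; the scratch directory and the placeholder content.txt are assumptions for illustration, since the real testdata/build contents are not shown in this log:

# Hypothetical recreation of the build context implied by the STEP output above.
mkdir -p /tmp/build-sketch
echo placeholder > /tmp/build-sketch/content.txt
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-sketch/Dockerfile
# Same driver-level command the test runs (functional_test.go:332), pointed at the sketch dir:
out/minikube-linux-amd64 -p functional-567541 image build -t localhost/my-image:functional-567541 /tmp/build-sketch --alsologtostderr
# The assertion that failed in this run: the freshly built tag should show up in the listing.
out/minikube-linux-amd64 -p functional-567541 image ls | grep localhost/my-image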

                                                
                                    
TestPreload (169.28s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-971370 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-971370 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m28.585740037s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-971370 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-971370 image pull gcr.io/k8s-minikube/busybox: (3.02111523s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-971370
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-971370: (7.286233782s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-971370 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0210 11:29:06.276510  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-971370 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.582991668s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-971370 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-02-10 11:29:06.887809658 +0000 UTC m=+3371.658095609
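Note: the commands logged above (preload_test.go:44-76) amount to a short scenario that can be replayed by hand. A sketch, assuming the same profile name; the final grep is the check that came up empty in this run, since gcr.io/k8s-minikube/busybox is missing from the image list printed above:

out/minikube-linux-amd64 start -p test-preload-971370 --memory=2200 --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
out/minikube-linux-amd64 -p test-preload-971370 image pull gcr.io/k8s-minikube/busybox
out/minikube-linux-amd64 stop -p test-preload-971370
out/minikube-linux-amd64 start -p test-preload-971370 --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
# The pulled busybox image is expected to survive the stop/start cycle:
out/minikube-linux-amd64 -p test-preload-971370 image list | grep gcr.io/k8s-minikube/busybox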
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-971370 -n test-preload-971370
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-971370 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-971370 logs -n 25: (1.010517222s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-646190 ssh -n                                                                 | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:14 UTC | 10 Feb 25 11:14 UTC |
	|         | multinode-646190-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-646190 ssh -n multinode-646190 sudo cat                                       | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:14 UTC | 10 Feb 25 11:14 UTC |
	|         | /home/docker/cp-test_multinode-646190-m03_multinode-646190.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-646190 cp multinode-646190-m03:/home/docker/cp-test.txt                       | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:14 UTC | 10 Feb 25 11:14 UTC |
	|         | multinode-646190-m02:/home/docker/cp-test_multinode-646190-m03_multinode-646190-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-646190 ssh -n                                                                 | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:14 UTC | 10 Feb 25 11:14 UTC |
	|         | multinode-646190-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-646190 ssh -n multinode-646190-m02 sudo cat                                   | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:14 UTC | 10 Feb 25 11:14 UTC |
	|         | /home/docker/cp-test_multinode-646190-m03_multinode-646190-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-646190 node stop m03                                                          | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:14 UTC | 10 Feb 25 11:14 UTC |
	| node    | multinode-646190 node start                                                             | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:14 UTC | 10 Feb 25 11:14 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-646190                                                                | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:14 UTC |                     |
	| stop    | -p multinode-646190                                                                     | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:14 UTC | 10 Feb 25 11:17 UTC |
	| start   | -p multinode-646190                                                                     | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | 10 Feb 25 11:20 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-646190                                                                | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:20 UTC |                     |
	| node    | multinode-646190 node delete                                                            | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:20 UTC | 10 Feb 25 11:20 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-646190 stop                                                                   | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:20 UTC | 10 Feb 25 11:23 UTC |
	| start   | -p multinode-646190                                                                     | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:23 UTC | 10 Feb 25 11:25 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-646190                                                                | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:25 UTC |                     |
	| start   | -p multinode-646190-m02                                                                 | multinode-646190-m02 | jenkins | v1.35.0 | 10 Feb 25 11:25 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-646190-m03                                                                 | multinode-646190-m03 | jenkins | v1.35.0 | 10 Feb 25 11:25 UTC | 10 Feb 25 11:26 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-646190                                                                 | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:26 UTC |                     |
	| delete  | -p multinode-646190-m03                                                                 | multinode-646190-m03 | jenkins | v1.35.0 | 10 Feb 25 11:26 UTC | 10 Feb 25 11:26 UTC |
	| delete  | -p multinode-646190                                                                     | multinode-646190     | jenkins | v1.35.0 | 10 Feb 25 11:26 UTC | 10 Feb 25 11:26 UTC |
	| start   | -p test-preload-971370                                                                  | test-preload-971370  | jenkins | v1.35.0 | 10 Feb 25 11:26 UTC | 10 Feb 25 11:27 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-971370 image pull                                                          | test-preload-971370  | jenkins | v1.35.0 | 10 Feb 25 11:27 UTC | 10 Feb 25 11:27 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-971370                                                                  | test-preload-971370  | jenkins | v1.35.0 | 10 Feb 25 11:27 UTC | 10 Feb 25 11:27 UTC |
	| start   | -p test-preload-971370                                                                  | test-preload-971370  | jenkins | v1.35.0 | 10 Feb 25 11:27 UTC | 10 Feb 25 11:29 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-971370 image list                                                          | test-preload-971370  | jenkins | v1.35.0 | 10 Feb 25 11:29 UTC | 10 Feb 25 11:29 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 11:27:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 11:27:59.134997  147603 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:27:59.135114  147603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:27:59.135125  147603 out.go:358] Setting ErrFile to fd 2...
	I0210 11:27:59.135133  147603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:27:59.135386  147603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:27:59.135939  147603 out.go:352] Setting JSON to false
	I0210 11:27:59.136854  147603 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7821,"bootTime":1739179058,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 11:27:59.136973  147603 start.go:139] virtualization: kvm guest
	I0210 11:27:59.139098  147603 out.go:177] * [test-preload-971370] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 11:27:59.140445  147603 notify.go:220] Checking for updates...
	I0210 11:27:59.140449  147603 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:27:59.141798  147603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:27:59.142986  147603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:27:59.144126  147603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:27:59.145247  147603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 11:27:59.146228  147603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:27:59.147629  147603 config.go:182] Loaded profile config "test-preload-971370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0210 11:27:59.148009  147603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:27:59.148053  147603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:27:59.162812  147603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37453
	I0210 11:27:59.163280  147603 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:27:59.163829  147603 main.go:141] libmachine: Using API Version  1
	I0210 11:27:59.163852  147603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:27:59.164192  147603 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:27:59.164403  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	I0210 11:27:59.165866  147603 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0210 11:27:59.166878  147603 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:27:59.167166  147603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:27:59.167221  147603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:27:59.182006  147603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0210 11:27:59.182490  147603 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:27:59.182903  147603 main.go:141] libmachine: Using API Version  1
	I0210 11:27:59.182925  147603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:27:59.183307  147603 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:27:59.183489  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	I0210 11:27:59.218531  147603 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 11:27:59.219724  147603 start.go:297] selected driver: kvm2
	I0210 11:27:59.219742  147603 start.go:901] validating driver "kvm2" against &{Name:test-preload-971370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-971370
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:27:59.219882  147603 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:27:59.220892  147603 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:27:59.220997  147603 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-109271/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 11:27:59.237656  147603 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 11:27:59.237995  147603 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:27:59.238024  147603 cni.go:84] Creating CNI manager for ""
	I0210 11:27:59.238063  147603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:27:59.238113  147603 start.go:340] cluster config:
	{Name:test-preload-971370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-971370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:27:59.238250  147603 iso.go:125] acquiring lock: {Name:mk479d49a84808a4b16be867aad83d1d3d802291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:27:59.239999  147603 out.go:177] * Starting "test-preload-971370" primary control-plane node in "test-preload-971370" cluster
	I0210 11:27:59.241066  147603 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0210 11:28:00.122331  147603 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0210 11:28:00.122398  147603 cache.go:56] Caching tarball of preloaded images
	I0210 11:28:00.122544  147603 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0210 11:28:00.124262  147603 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0210 11:28:00.125326  147603 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0210 11:28:00.223957  147603 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0210 11:28:11.460207  147603 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0210 11:28:11.460311  147603 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0210 11:28:12.434658  147603 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0210 11:28:12.434789  147603 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/config.json ...
	I0210 11:28:12.435018  147603 start.go:360] acquireMachinesLock for test-preload-971370: {Name:mke6c3a615c5915495f0682c0833d8830c2c1004 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:28:12.435089  147603 start.go:364] duration metric: took 47.674µs to acquireMachinesLock for "test-preload-971370"
	I0210 11:28:12.435103  147603 start.go:96] Skipping create...Using existing machine configuration
	I0210 11:28:12.435112  147603 fix.go:54] fixHost starting: 
	I0210 11:28:12.435443  147603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:28:12.435479  147603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:28:12.450249  147603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0210 11:28:12.450745  147603 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:28:12.451336  147603 main.go:141] libmachine: Using API Version  1
	I0210 11:28:12.451371  147603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:28:12.451702  147603 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:28:12.451941  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	I0210 11:28:12.452080  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetState
	I0210 11:28:12.453692  147603 fix.go:112] recreateIfNeeded on test-preload-971370: state=Stopped err=<nil>
	I0210 11:28:12.453716  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	W0210 11:28:12.453908  147603 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 11:28:12.455674  147603 out.go:177] * Restarting existing kvm2 VM for "test-preload-971370" ...
	I0210 11:28:12.456840  147603 main.go:141] libmachine: (test-preload-971370) Calling .Start
	I0210 11:28:12.457025  147603 main.go:141] libmachine: (test-preload-971370) starting domain...
	I0210 11:28:12.457041  147603 main.go:141] libmachine: (test-preload-971370) ensuring networks are active...
	I0210 11:28:12.457723  147603 main.go:141] libmachine: (test-preload-971370) Ensuring network default is active
	I0210 11:28:12.458001  147603 main.go:141] libmachine: (test-preload-971370) Ensuring network mk-test-preload-971370 is active
	I0210 11:28:12.458460  147603 main.go:141] libmachine: (test-preload-971370) getting domain XML...
	I0210 11:28:12.459198  147603 main.go:141] libmachine: (test-preload-971370) creating domain...
	I0210 11:28:13.644913  147603 main.go:141] libmachine: (test-preload-971370) waiting for IP...
	I0210 11:28:13.645736  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:13.646067  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:13.646183  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:13.646074  147687 retry.go:31] will retry after 198.192024ms: waiting for domain to come up
	I0210 11:28:13.845722  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:13.846185  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:13.846225  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:13.846162  147687 retry.go:31] will retry after 311.448966ms: waiting for domain to come up
	I0210 11:28:14.159665  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:14.160231  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:14.160265  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:14.160183  147687 retry.go:31] will retry after 332.891093ms: waiting for domain to come up
	I0210 11:28:14.494800  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:14.495251  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:14.495278  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:14.495228  147687 retry.go:31] will retry after 366.702609ms: waiting for domain to come up
	I0210 11:28:14.863859  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:14.864226  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:14.864253  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:14.864207  147687 retry.go:31] will retry after 482.437514ms: waiting for domain to come up
	I0210 11:28:15.347839  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:15.348241  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:15.348265  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:15.348217  147687 retry.go:31] will retry after 641.51796ms: waiting for domain to come up
	I0210 11:28:15.991021  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:15.991625  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:15.991645  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:15.991587  147687 retry.go:31] will retry after 757.333113ms: waiting for domain to come up
	I0210 11:28:16.750521  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:16.750913  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:16.750950  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:16.750895  147687 retry.go:31] will retry after 1.005674061s: waiting for domain to come up
	I0210 11:28:17.758508  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:17.758916  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:17.758957  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:17.758893  147687 retry.go:31] will retry after 1.335585183s: waiting for domain to come up
	I0210 11:28:19.096385  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:19.096770  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:19.096795  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:19.096745  147687 retry.go:31] will retry after 1.578923252s: waiting for domain to come up
	I0210 11:28:20.677328  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:20.677749  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:20.677775  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:20.677719  147687 retry.go:31] will retry after 2.392857056s: waiting for domain to come up
	I0210 11:28:23.072657  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:23.073073  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:23.073122  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:23.073040  147687 retry.go:31] will retry after 2.552488071s: waiting for domain to come up
	I0210 11:28:25.628754  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:25.629101  147603 main.go:141] libmachine: (test-preload-971370) DBG | unable to find current IP address of domain test-preload-971370 in network mk-test-preload-971370
	I0210 11:28:25.629129  147603 main.go:141] libmachine: (test-preload-971370) DBG | I0210 11:28:25.629064  147687 retry.go:31] will retry after 3.226063716s: waiting for domain to come up
	I0210 11:28:28.856800  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:28.857210  147603 main.go:141] libmachine: (test-preload-971370) found domain IP: 192.168.39.60
	I0210 11:28:28.857237  147603 main.go:141] libmachine: (test-preload-971370) reserving static IP address...
	I0210 11:28:28.857251  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has current primary IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:28.857661  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "test-preload-971370", mac: "52:54:00:ca:67:e5", ip: "192.168.39.60"} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:28.857696  147603 main.go:141] libmachine: (test-preload-971370) DBG | skip adding static IP to network mk-test-preload-971370 - found existing host DHCP lease matching {name: "test-preload-971370", mac: "52:54:00:ca:67:e5", ip: "192.168.39.60"}
	I0210 11:28:28.857719  147603 main.go:141] libmachine: (test-preload-971370) reserved static IP address 192.168.39.60 for domain test-preload-971370
	I0210 11:28:28.857734  147603 main.go:141] libmachine: (test-preload-971370) DBG | Getting to WaitForSSH function...
	I0210 11:28:28.857748  147603 main.go:141] libmachine: (test-preload-971370) waiting for SSH...
	I0210 11:28:28.859860  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:28.860195  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:28.860226  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:28.860334  147603 main.go:141] libmachine: (test-preload-971370) DBG | Using SSH client type: external
	I0210 11:28:28.860362  147603 main.go:141] libmachine: (test-preload-971370) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/test-preload-971370/id_rsa (-rw-------)
	I0210 11:28:28.860394  147603 main.go:141] libmachine: (test-preload-971370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-109271/.minikube/machines/test-preload-971370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 11:28:28.860407  147603 main.go:141] libmachine: (test-preload-971370) DBG | About to run SSH command:
	I0210 11:28:28.860417  147603 main.go:141] libmachine: (test-preload-971370) DBG | exit 0
	I0210 11:28:28.978913  147603 main.go:141] libmachine: (test-preload-971370) DBG | SSH cmd err, output: <nil>: 
	I0210 11:28:28.979342  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetConfigRaw
	I0210 11:28:28.980086  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetIP
	I0210 11:28:28.982473  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:28.982840  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:28.982877  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:28.983050  147603 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/config.json ...
	I0210 11:28:28.983263  147603 machine.go:93] provisionDockerMachine start ...
	I0210 11:28:28.983283  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	I0210 11:28:28.983485  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:28.985734  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:28.986071  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:28.986099  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:28.986293  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHPort
	I0210 11:28:28.986476  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:28.986627  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:28.986736  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHUsername
	I0210 11:28:28.986891  147603 main.go:141] libmachine: Using SSH client type: native
	I0210 11:28:28.987154  147603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0210 11:28:28.987169  147603 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:28:29.082934  147603 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 11:28:29.082964  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetMachineName
	I0210 11:28:29.083352  147603 buildroot.go:166] provisioning hostname "test-preload-971370"
	I0210 11:28:29.083381  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetMachineName
	I0210 11:28:29.083610  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:29.085863  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.086163  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:29.086196  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.086278  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHPort
	I0210 11:28:29.086473  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:29.086646  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:29.086782  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHUsername
	I0210 11:28:29.086935  147603 main.go:141] libmachine: Using SSH client type: native
	I0210 11:28:29.087096  147603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0210 11:28:29.087108  147603 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-971370 && echo "test-preload-971370" | sudo tee /etc/hostname
	I0210 11:28:29.198000  147603 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-971370
	
	I0210 11:28:29.198037  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:29.200573  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.200854  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:29.200873  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.201047  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHPort
	I0210 11:28:29.201254  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:29.201429  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:29.201587  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHUsername
	I0210 11:28:29.201740  147603 main.go:141] libmachine: Using SSH client type: native
	I0210 11:28:29.201902  147603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0210 11:28:29.201917  147603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-971370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-971370/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-971370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:28:29.307033  147603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:28:29.307070  147603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20385-109271/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-109271/.minikube}
	I0210 11:28:29.307107  147603 buildroot.go:174] setting up certificates
	I0210 11:28:29.307116  147603 provision.go:84] configureAuth start
	I0210 11:28:29.307126  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetMachineName
	I0210 11:28:29.307400  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetIP
	I0210 11:28:29.309784  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.310126  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:29.310157  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.310324  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:29.312474  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.312786  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:29.312805  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.312927  147603 provision.go:143] copyHostCerts
	I0210 11:28:29.312995  147603 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem, removing ...
	I0210 11:28:29.313010  147603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem
	I0210 11:28:29.313109  147603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem (1078 bytes)
	I0210 11:28:29.313229  147603 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem, removing ...
	I0210 11:28:29.313242  147603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem
	I0210 11:28:29.313282  147603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem (1123 bytes)
	I0210 11:28:29.313361  147603 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem, removing ...
	I0210 11:28:29.313371  147603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem
	I0210 11:28:29.313406  147603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem (1679 bytes)
	I0210 11:28:29.313477  147603 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem org=jenkins.test-preload-971370 san=[127.0.0.1 192.168.39.60 localhost minikube test-preload-971370]
	I0210 11:28:29.555378  147603 provision.go:177] copyRemoteCerts
	I0210 11:28:29.555440  147603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:28:29.555471  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:29.558148  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.558486  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:29.558512  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.558641  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHPort
	I0210 11:28:29.558828  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:29.558946  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHUsername
	I0210 11:28:29.559039  147603 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/test-preload-971370/id_rsa Username:docker}
	I0210 11:28:29.636882  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:28:29.659468  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0210 11:28:29.680597  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 11:28:29.701951  147603 provision.go:87] duration metric: took 394.817882ms to configureAuth
	I0210 11:28:29.701982  147603 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:28:29.702184  147603 config.go:182] Loaded profile config "test-preload-971370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0210 11:28:29.702277  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:29.704824  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.705190  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:29.705214  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.705376  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHPort
	I0210 11:28:29.705558  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:29.705685  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:29.705846  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHUsername
	I0210 11:28:29.705972  147603 main.go:141] libmachine: Using SSH client type: native
	I0210 11:28:29.706215  147603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0210 11:28:29.706242  147603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 11:28:29.915252  147603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 11:28:29.915279  147603 machine.go:96] duration metric: took 932.002088ms to provisionDockerMachine
	I0210 11:28:29.915292  147603 start.go:293] postStartSetup for "test-preload-971370" (driver="kvm2")
	I0210 11:28:29.915302  147603 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:28:29.915323  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	I0210 11:28:29.915624  147603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:28:29.915661  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:29.918151  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.918497  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:29.918540  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:29.918659  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHPort
	I0210 11:28:29.918815  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:29.918958  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHUsername
	I0210 11:28:29.919113  147603 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/test-preload-971370/id_rsa Username:docker}
	I0210 11:28:30.000572  147603 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:28:30.004354  147603 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:28:30.004383  147603 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/addons for local assets ...
	I0210 11:28:30.004461  147603 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/files for local assets ...
	I0210 11:28:30.004564  147603 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem -> 1164702.pem in /etc/ssl/certs
	I0210 11:28:30.004663  147603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:28:30.013581  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:28:30.034814  147603 start.go:296] duration metric: took 119.506586ms for postStartSetup
	I0210 11:28:30.034863  147603 fix.go:56] duration metric: took 17.59975053s for fixHost
	I0210 11:28:30.034892  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:30.038084  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:30.038389  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:30.038407  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:30.038604  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHPort
	I0210 11:28:30.038795  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:30.038956  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:30.039095  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHUsername
	I0210 11:28:30.039259  147603 main.go:141] libmachine: Using SSH client type: native
	I0210 11:28:30.039462  147603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0210 11:28:30.039477  147603 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:28:30.135538  147603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739186910.109110285
	
	I0210 11:28:30.135566  147603 fix.go:216] guest clock: 1739186910.109110285
	I0210 11:28:30.135576  147603 fix.go:229] Guest: 2025-02-10 11:28:30.109110285 +0000 UTC Remote: 2025-02-10 11:28:30.03486831 +0000 UTC m=+30.937411052 (delta=74.241975ms)
	I0210 11:28:30.135622  147603 fix.go:200] guest clock delta is within tolerance: 74.241975ms
	I0210 11:28:30.135630  147603 start.go:83] releasing machines lock for "test-preload-971370", held for 17.70053081s
	I0210 11:28:30.135654  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	I0210 11:28:30.135924  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetIP
	I0210 11:28:30.138286  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:30.138734  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:30.138761  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:30.138972  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	I0210 11:28:30.139452  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	I0210 11:28:30.139635  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	I0210 11:28:30.139749  147603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:28:30.139793  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:30.139829  147603 ssh_runner.go:195] Run: cat /version.json
	I0210 11:28:30.139858  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:30.142481  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:30.142647  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:30.142845  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:30.142872  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:30.143023  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHPort
	I0210 11:28:30.143067  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:30.143109  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:30.143239  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:30.143267  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHPort
	I0210 11:28:30.143435  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:30.143438  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHUsername
	I0210 11:28:30.143632  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHUsername
	I0210 11:28:30.143639  147603 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/test-preload-971370/id_rsa Username:docker}
	I0210 11:28:30.143801  147603 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/test-preload-971370/id_rsa Username:docker}
	I0210 11:28:30.215194  147603 ssh_runner.go:195] Run: systemctl --version
	I0210 11:28:30.245729  147603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 11:28:30.380791  147603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 11:28:30.387117  147603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:28:30.387173  147603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:28:30.401541  147603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:28:30.401567  147603 start.go:495] detecting cgroup driver to use...
	I0210 11:28:30.401637  147603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:28:30.416758  147603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:28:30.430748  147603 docker.go:217] disabling cri-docker service (if available) ...
	I0210 11:28:30.430801  147603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 11:28:30.444100  147603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 11:28:30.456771  147603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 11:28:30.561809  147603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 11:28:30.705480  147603 docker.go:233] disabling docker service ...
	I0210 11:28:30.705573  147603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 11:28:30.719088  147603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 11:28:30.731569  147603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 11:28:30.840306  147603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 11:28:30.949623  147603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 11:28:30.962856  147603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:28:30.979704  147603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0210 11:28:30.979767  147603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:28:30.989255  147603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 11:28:30.989322  147603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:28:30.998681  147603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:28:31.008932  147603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:28:31.019088  147603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:28:31.028949  147603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:28:31.038434  147603 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:28:31.053827  147603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:28:31.062929  147603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:28:31.071286  147603 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:28:31.071327  147603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:28:31.082662  147603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:28:31.091121  147603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:28:31.197571  147603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 11:28:31.289193  147603 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 11:28:31.289273  147603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 11:28:31.293731  147603 start.go:563] Will wait 60s for crictl version
	I0210 11:28:31.293777  147603 ssh_runner.go:195] Run: which crictl
	I0210 11:28:31.297111  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:28:31.337135  147603 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 11:28:31.337217  147603 ssh_runner.go:195] Run: crio --version
	I0210 11:28:31.364625  147603 ssh_runner.go:195] Run: crio --version
	I0210 11:28:31.393086  147603 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0210 11:28:31.394210  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetIP
	I0210 11:28:31.396661  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:31.396997  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:31.397021  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:31.397256  147603 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 11:28:31.400863  147603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:28:31.412523  147603 kubeadm.go:883] updating cluster {Name:test-preload-971370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-971370 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:28:31.412622  147603 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0210 11:28:31.412665  147603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:28:31.445300  147603 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0210 11:28:31.445365  147603 ssh_runner.go:195] Run: which lz4
	I0210 11:28:31.448974  147603 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 11:28:31.452588  147603 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 11:28:31.452615  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0210 11:28:32.843738  147603 crio.go:462] duration metric: took 1.394780356s to copy over tarball
	I0210 11:28:32.843819  147603 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 11:28:35.120140  147603 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.276285313s)
	I0210 11:28:35.120177  147603 crio.go:469] duration metric: took 2.276400679s to extract the tarball
	I0210 11:28:35.120188  147603 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 11:28:35.160021  147603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:28:35.198789  147603 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0210 11:28:35.198824  147603 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 11:28:35.198909  147603 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:28:35.198921  147603 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0210 11:28:35.198925  147603 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0210 11:28:35.198990  147603 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0210 11:28:35.199015  147603 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0210 11:28:35.199012  147603 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0210 11:28:35.199048  147603 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0210 11:28:35.198964  147603 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 11:28:35.200673  147603 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0210 11:28:35.200685  147603 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0210 11:28:35.200696  147603 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0210 11:28:35.200681  147603 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0210 11:28:35.200728  147603 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0210 11:28:35.200742  147603 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 11:28:35.200740  147603 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:28:35.200728  147603 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0210 11:28:35.401068  147603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0210 11:28:35.416407  147603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0210 11:28:35.426439  147603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0210 11:28:35.430957  147603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0210 11:28:35.435711  147603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 11:28:35.439880  147603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0210 11:28:35.447764  147603 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0210 11:28:35.447811  147603 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0210 11:28:35.447852  147603 ssh_runner.go:195] Run: which crictl
	I0210 11:28:35.481261  147603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0210 11:28:35.523990  147603 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0210 11:28:35.524028  147603 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0210 11:28:35.524070  147603 ssh_runner.go:195] Run: which crictl
	I0210 11:28:35.525106  147603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0210 11:28:35.525147  147603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0210 11:28:35.525195  147603 ssh_runner.go:195] Run: which crictl
	I0210 11:28:35.576650  147603 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0210 11:28:35.576693  147603 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0210 11:28:35.576741  147603 ssh_runner.go:195] Run: which crictl
	I0210 11:28:35.576763  147603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0210 11:28:35.576798  147603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 11:28:35.576842  147603 ssh_runner.go:195] Run: which crictl
	I0210 11:28:35.579882  147603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0210 11:28:35.579913  147603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0210 11:28:35.579942  147603 ssh_runner.go:195] Run: which crictl
	I0210 11:28:35.580005  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0210 11:28:35.580062  147603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0210 11:28:35.580097  147603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0210 11:28:35.580109  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0210 11:28:35.580131  147603 ssh_runner.go:195] Run: which crictl
	I0210 11:28:35.583447  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0210 11:28:35.589379  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0210 11:28:35.589386  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 11:28:35.589578  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0210 11:28:35.689798  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0210 11:28:35.689806  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0210 11:28:35.689899  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0210 11:28:35.698691  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0210 11:28:35.702240  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0210 11:28:35.714561  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 11:28:35.714620  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0210 11:28:35.805886  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0210 11:28:35.852453  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0210 11:28:35.853783  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0210 11:28:35.855128  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0210 11:28:35.858714  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0210 11:28:35.858763  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 11:28:35.871491  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0210 11:28:35.927369  147603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0210 11:28:35.927473  147603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0210 11:28:35.966634  147603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0210 11:28:35.966730  147603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0210 11:28:35.994215  147603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0210 11:28:36.008561  147603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0210 11:28:36.008584  147603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0210 11:28:36.008564  147603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0210 11:28:36.008596  147603 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0210 11:28:36.008597  147603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0210 11:28:36.008565  147603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0210 11:28:36.008570  147603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0210 11:28:36.008637  147603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0210 11:28:36.008683  147603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0210 11:28:36.008700  147603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0210 11:28:36.008702  147603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0210 11:28:36.008719  147603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0210 11:28:36.048748  147603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0210 11:28:36.048855  147603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0210 11:28:36.346733  147603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:28:38.878121  147603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.86941953s)
	I0210 11:28:38.878152  147603 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.869441893s)
	I0210 11:28:38.878156  147603 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.869422783s)
	I0210 11:28:38.878176  147603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0210 11:28:38.878176  147603 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.829307958s)
	I0210 11:28:38.878190  147603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0210 11:28:38.878191  147603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0210 11:28:38.878166  147603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0210 11:28:38.878136  147603 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.869426429s)
	I0210 11:28:38.878251  147603 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.531468197s)
	I0210 11:28:38.878262  147603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0210 11:28:38.878227  147603 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0210 11:28:38.878341  147603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0210 11:28:38.878187  147603 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.869438241s)
	I0210 11:28:38.878429  147603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0210 11:28:41.124202  147603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.245828991s)
	I0210 11:28:41.124249  147603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0210 11:28:41.124280  147603 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0210 11:28:41.124332  147603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0210 11:28:41.265984  147603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0210 11:28:41.266041  147603 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0210 11:28:41.266113  147603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0210 11:28:41.707453  147603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0210 11:28:41.707516  147603 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0210 11:28:41.707562  147603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0210 11:28:42.550875  147603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0210 11:28:42.550926  147603 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0210 11:28:42.550973  147603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0210 11:28:43.292382  147603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0210 11:28:43.292436  147603 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0210 11:28:43.292485  147603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0210 11:28:43.937402  147603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0210 11:28:43.937462  147603 cache_images.go:123] Successfully loaded all cached images
	I0210 11:28:43.937490  147603 cache_images.go:92] duration metric: took 8.738653013s to LoadCachedImages
	I0210 11:28:43.937507  147603 kubeadm.go:934] updating node { 192.168.39.60 8443 v1.24.4 crio true true} ...
	I0210 11:28:43.937623  147603 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-971370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-971370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:28:43.937690  147603 ssh_runner.go:195] Run: crio config
	I0210 11:28:43.981960  147603 cni.go:84] Creating CNI manager for ""
	I0210 11:28:43.981983  147603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:28:43.981992  147603 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 11:28:43.982012  147603 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-971370 NodeName:test-preload-971370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 11:28:43.982144  147603 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-971370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 11:28:43.982210  147603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0210 11:28:43.991533  147603 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:28:43.991583  147603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:28:44.000164  147603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0210 11:28:44.014909  147603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:28:44.029403  147603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0210 11:28:44.044288  147603 ssh_runner.go:195] Run: grep 192.168.39.60	control-plane.minikube.internal$ /etc/hosts
	I0210 11:28:44.047625  147603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:28:44.058295  147603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:28:44.173315  147603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:28:44.189607  147603 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370 for IP: 192.168.39.60
	I0210 11:28:44.189630  147603 certs.go:194] generating shared ca certs ...
	I0210 11:28:44.189651  147603 certs.go:226] acquiring lock for ca certs: {Name:mk41def3593b0ff6effd099cf80de2e0c576c931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:28:44.189830  147603 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key
	I0210 11:28:44.189874  147603 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key
	I0210 11:28:44.189885  147603 certs.go:256] generating profile certs ...
	I0210 11:28:44.189995  147603 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/client.key
	I0210 11:28:44.190078  147603 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/apiserver.key.3a44defe
	I0210 11:28:44.190134  147603 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/proxy-client.key
	I0210 11:28:44.190281  147603 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem (1338 bytes)
	W0210 11:28:44.190321  147603 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470_empty.pem, impossibly tiny 0 bytes
	I0210 11:28:44.190336  147603 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 11:28:44.190383  147603 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem (1078 bytes)
	I0210 11:28:44.190435  147603 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem (1123 bytes)
	I0210 11:28:44.190469  147603 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem (1679 bytes)
	I0210 11:28:44.190523  147603 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:28:44.191453  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:28:44.231547  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0210 11:28:44.264645  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:28:44.298565  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 11:28:44.327374  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0210 11:28:44.356216  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:28:44.400691  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:28:44.425572  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 11:28:44.447459  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /usr/share/ca-certificates/1164702.pem (1708 bytes)
	I0210 11:28:44.468804  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:28:44.489750  147603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem --> /usr/share/ca-certificates/116470.pem (1338 bytes)
	I0210 11:28:44.511821  147603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 11:28:44.526689  147603 ssh_runner.go:195] Run: openssl version
	I0210 11:28:44.531761  147603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116470.pem && ln -fs /usr/share/ca-certificates/116470.pem /etc/ssl/certs/116470.pem"
	I0210 11:28:44.541549  147603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116470.pem
	I0210 11:28:44.545501  147603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:41 /usr/share/ca-certificates/116470.pem
	I0210 11:28:44.545547  147603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116470.pem
	I0210 11:28:44.550717  147603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116470.pem /etc/ssl/certs/51391683.0"
	I0210 11:28:44.560523  147603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1164702.pem && ln -fs /usr/share/ca-certificates/1164702.pem /etc/ssl/certs/1164702.pem"
	I0210 11:28:44.570462  147603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1164702.pem
	I0210 11:28:44.574334  147603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:41 /usr/share/ca-certificates/1164702.pem
	I0210 11:28:44.574382  147603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1164702.pem
	I0210 11:28:44.579542  147603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1164702.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:28:44.589323  147603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:28:44.599195  147603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:28:44.603221  147603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:28:44.603266  147603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:28:44.608364  147603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
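
The lines above install each CA bundle into /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0), which is how OpenSSL's default lookup locates trusted CAs. A minimal Go sketch of that hash-and-symlink step, shelling out to the same openssl call; the helper name and the example paths are illustrative, not taken from minikube's source.

    // installCA links a CA certificate into the certs directory under its
    // OpenSSL subject-hash name, mirroring the "openssl x509 -hash -noout"
    // plus "ln -fs" steps in the log above. Illustrative sketch only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pemPath, certsDir string) error {
        // openssl prints the 8-hex-digit subject hash used as the link name.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        // Replace any stale link, as "ln -fs" does.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
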
	I0210 11:28:44.618006  147603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:28:44.621911  147603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 11:28:44.627426  147603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 11:28:44.632683  147603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 11:28:44.638215  147603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 11:28:44.643425  147603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 11:28:44.648628  147603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
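
Each "-checkend 86400" call above asks OpenSSL whether the certificate expires within the next 24 hours (86400 seconds); exit status 0 means it stays valid past that window, so the existing certs can be reused. A small sketch of the same check done in-process with crypto/x509 instead of shelling out; the file path is just an example.

    // Report whether a certificate is still valid 24h from now,
    // equivalent in effect to "openssl x509 -noout -checkend 86400".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // Valid only if the expiry lies beyond now + d.
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
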
	I0210 11:28:44.653906  147603 kubeadm.go:392] StartCluster: {Name:test-preload-971370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-971370 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:28:44.654019  147603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 11:28:44.654068  147603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:28:44.689793  147603 cri.go:89] found id: ""
	I0210 11:28:44.689867  147603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 11:28:44.699296  147603 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 11:28:44.699325  147603 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 11:28:44.699380  147603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 11:28:44.708229  147603 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 11:28:44.708700  147603 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-971370" does not appear in /home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:28:44.708811  147603 kubeconfig.go:62] /home/jenkins/minikube-integration/20385-109271/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-971370" cluster setting kubeconfig missing "test-preload-971370" context setting]
	I0210 11:28:44.709075  147603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/kubeconfig: {Name:mk38b84c4ae8f3ad09ecb56633115faef0fe39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:28:44.709625  147603 kapi.go:59] client config for test-preload-971370: &rest.Config{Host:"https://192.168.39.60:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/client.crt", KeyFile:"/home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/client.key", CAFile:"/home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24db320), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 11:28:44.710067  147603 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0210 11:28:44.710084  147603 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0210 11:28:44.710088  147603 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0210 11:28:44.710092  147603 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0210 11:28:44.710439  147603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 11:28:44.719289  147603 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.60
	I0210 11:28:44.719319  147603 kubeadm.go:1160] stopping kube-system containers ...
	I0210 11:28:44.719333  147603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 11:28:44.719374  147603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:28:44.752884  147603 cri.go:89] found id: ""
	I0210 11:28:44.752975  147603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 11:28:44.768946  147603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:28:44.777888  147603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:28:44.777925  147603 kubeadm.go:157] found existing configuration files:
	
	I0210 11:28:44.777963  147603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:28:44.786329  147603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:28:44.786371  147603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:28:44.794754  147603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:28:44.802601  147603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:28:44.802645  147603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:28:44.811108  147603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:28:44.819156  147603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:28:44.819211  147603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:28:44.827686  147603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:28:44.835800  147603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:28:44.835844  147603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
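
The grep/rm pairs above implement the stale-config check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. Here none of the files exist, so each grep exits with status 2 and each rm is a no-op. A compact sketch of the same logic; the helper name is hypothetical.

    // Remove any kubeconfig that does not point at the expected
    // control-plane endpoint, so "kubeadm init phase kubeconfig" rewrites it.
    package main

    import (
        "bytes"
        "os"
        "path/filepath"
    )

    func cleanStaleConfigs(dir, endpoint string) {
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            p := filepath.Join(dir, name)
            data, err := os.ReadFile(p)
            // Missing file or wrong endpoint: remove it ("rm -f" tolerates both).
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                _ = os.Remove(p)
            }
        }
    }

    func main() {
        cleanStaleConfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8443")
    }
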
	I0210 11:28:44.845023  147603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:28:44.854317  147603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:28:44.944308  147603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:28:45.396425  147603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:28:45.655768  147603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:28:45.732983  147603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
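
The control-plane restart above is driven by individual "kubeadm init phase" subcommands rather than a full "kubeadm init": certs, kubeconfig, kubelet-start, control-plane, and etcd are regenerated in that order against the same /var/tmp/minikube/kubeadm.yaml. A sketch of running that sequence locally, with the same PATH prefix as the logged commands; error handling is simplified compared to minikube's ssh_runner.

    // Run the kubeadm init phases used for a control-plane restart,
    // in the same order as the log above. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, p := range phases {
            cmd := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
                return
            }
        }
    }
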
	I0210 11:28:45.837662  147603 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:28:45.837762  147603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:28:46.338269  147603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:28:46.838625  147603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:28:46.856575  147603 api_server.go:72] duration metric: took 1.018914651s to wait for apiserver process to appear ...
	I0210 11:28:46.856605  147603 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:28:46.856628  147603 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0210 11:28:46.857198  147603 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0210 11:28:47.356930  147603 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0210 11:28:50.709140  147603 api_server.go:279] https://192.168.39.60:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 11:28:50.709171  147603 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 11:28:50.709198  147603 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0210 11:28:50.759827  147603 api_server.go:279] https://192.168.39.60:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 11:28:50.759867  147603 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 11:28:50.857173  147603 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0210 11:28:50.866550  147603 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 11:28:50.866591  147603 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 11:28:51.357326  147603 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0210 11:28:51.362034  147603 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 11:28:51.362062  147603 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 11:28:51.856729  147603 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0210 11:28:51.862766  147603 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0210 11:28:51.869650  147603 api_server.go:141] control plane version: v1.24.4
	I0210 11:28:51.869684  147603 api_server.go:131] duration metric: took 5.013070898s to wait for apiserver health ...
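
The /healthz probe above tolerates transient failures: first connection refused, then 403 (the anonymous probe before RBAC bootstrap completes), then 500 while post-start hooks finish, and finally 200 "ok" about five seconds in. A minimal polling sketch with the same semantics; InsecureSkipVerify is only for this sketch, since the probe cares about the status code, whereas the logged client trusts the cluster CA.

    // Poll an apiserver /healthz endpoint until it returns 200 or the
    // timeout expires, treating 403/500 as "up but not ready yet".
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "ok"
                }
                // 403/500: apiserver reachable but still bootstrapping; keep polling.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        fmt.Println(waitHealthy("https://192.168.39.60:8443/healthz", time.Minute))
    }
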
	I0210 11:28:51.869696  147603 cni.go:84] Creating CNI manager for ""
	I0210 11:28:51.869705  147603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:28:51.871486  147603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 11:28:51.872558  147603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 11:28:51.884427  147603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 11:28:51.902113  147603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:28:51.907508  147603 system_pods.go:59] 7 kube-system pods found
	I0210 11:28:51.907551  147603 system_pods.go:61] "coredns-6d4b75cb6d-h5bpg" [beae22b7-5816-45ce-a8a7-94ce57abcbe8] Running
	I0210 11:28:51.907560  147603 system_pods.go:61] "etcd-test-preload-971370" [02f81e22-aeac-408a-b9cd-8b2a9b307f5a] Running
	I0210 11:28:51.907566  147603 system_pods.go:61] "kube-apiserver-test-preload-971370" [524f41d1-14e3-44b4-b222-6a0babf482df] Running
	I0210 11:28:51.907572  147603 system_pods.go:61] "kube-controller-manager-test-preload-971370" [a4e54d5e-6404-4e47-a9d4-6fdc0eaed490] Running
	I0210 11:28:51.907581  147603 system_pods.go:61] "kube-proxy-s595q" [52a93f6c-4fef-4563-8aff-387c04515457] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0210 11:28:51.907593  147603 system_pods.go:61] "kube-scheduler-test-preload-971370" [16b938a5-d89e-4508-a36d-2cd201a36ce4] Running
	I0210 11:28:51.907604  147603 system_pods.go:61] "storage-provisioner" [baa22e7e-9325-461d-9bfb-0bce77dd1108] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 11:28:51.907614  147603 system_pods.go:74] duration metric: took 5.474384ms to wait for pod list to return data ...
	I0210 11:28:51.907625  147603 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:28:51.911625  147603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:28:51.911649  147603 node_conditions.go:123] node cpu capacity is 2
	I0210 11:28:51.911660  147603 node_conditions.go:105] duration metric: took 4.030491ms to run NodePressure ...
	I0210 11:28:51.911681  147603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:28:52.083420  147603 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0210 11:28:52.085905  147603 retry.go:31] will retry after 276.265329ms: kubelet not initialised
	I0210 11:28:52.365480  147603 retry.go:31] will retry after 546.102727ms: kubelet not initialised
	I0210 11:28:52.919137  147603 retry.go:31] will retry after 368.106732ms: kubelet not initialised
	I0210 11:28:53.291074  147603 kubeadm.go:739] kubelet initialised
	I0210 11:28:53.291106  147603 kubeadm.go:740] duration metric: took 1.207646848s waiting for restarted kubelet to initialise ...
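
The "will retry after ..." lines above come from a backoff helper that waits for the restarted kubelet to report its pods. A generic sketch of that retry pattern with growing, jittered delays; the exact backoff parameters minikube's retry package uses are not shown in the log.

    // Retry an operation with growing, jittered delays until it succeeds
    // or the attempts run out; a generic sketch of the retry pattern above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Grow the delay each attempt and add jitter so callers don't sync up.
            d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        i := 0
        _ = retry(5, 250*time.Millisecond, func() error {
            if i++; i < 3 {
                return errors.New("kubelet not initialised")
            }
            return nil
        })
    }
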
	I0210 11:28:53.291120  147603 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:28:53.294474  147603 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-h5bpg" in "kube-system" namespace to be "Ready" ...
	I0210 11:28:53.300024  147603 pod_ready.go:98] node "test-preload-971370" hosting pod "coredns-6d4b75cb6d-h5bpg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:53.300059  147603 pod_ready.go:82] duration metric: took 5.555521ms for pod "coredns-6d4b75cb6d-h5bpg" in "kube-system" namespace to be "Ready" ...
	E0210 11:28:53.300071  147603 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-971370" hosting pod "coredns-6d4b75cb6d-h5bpg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:53.300083  147603 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:28:53.305782  147603 pod_ready.go:98] node "test-preload-971370" hosting pod "etcd-test-preload-971370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:53.305820  147603 pod_ready.go:82] duration metric: took 5.718909ms for pod "etcd-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	E0210 11:28:53.305832  147603 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-971370" hosting pod "etcd-test-preload-971370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:53.305845  147603 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:28:53.311343  147603 pod_ready.go:98] node "test-preload-971370" hosting pod "kube-apiserver-test-preload-971370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:53.311368  147603 pod_ready.go:82] duration metric: took 5.508309ms for pod "kube-apiserver-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	E0210 11:28:53.311381  147603 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-971370" hosting pod "kube-apiserver-test-preload-971370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:53.311389  147603 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:28:53.315309  147603 pod_ready.go:98] node "test-preload-971370" hosting pod "kube-controller-manager-test-preload-971370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:53.315337  147603 pod_ready.go:82] duration metric: took 3.935095ms for pod "kube-controller-manager-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	E0210 11:28:53.315349  147603 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-971370" hosting pod "kube-controller-manager-test-preload-971370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:53.315358  147603 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-s595q" in "kube-system" namespace to be "Ready" ...
	I0210 11:28:53.690091  147603 pod_ready.go:98] node "test-preload-971370" hosting pod "kube-proxy-s595q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:53.690138  147603 pod_ready.go:82] duration metric: took 374.764188ms for pod "kube-proxy-s595q" in "kube-system" namespace to be "Ready" ...
	E0210 11:28:53.690150  147603 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-971370" hosting pod "kube-proxy-s595q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:53.690160  147603 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:28:54.090541  147603 pod_ready.go:98] node "test-preload-971370" hosting pod "kube-scheduler-test-preload-971370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:54.090570  147603 pod_ready.go:82] duration metric: took 400.402283ms for pod "kube-scheduler-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	E0210 11:28:54.090582  147603 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-971370" hosting pod "kube-scheduler-test-preload-971370" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:54.090595  147603 pod_ready.go:39] duration metric: took 799.458109ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:28:54.090616  147603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 11:28:54.103889  147603 ops.go:34] apiserver oom_adj: -16
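
The oom_adj read above confirms the kubelet has applied OOM-score protection to the apiserver: -16 makes the kernel's OOM killer strongly prefer other processes. A tiny sketch of the same check, shelling out exactly like the logged command.

    // Read the OOM adjustment of the running kube-apiserver, as the
    // "cat /proc/$(pgrep kube-apiserver)/oom_adj" step above does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("/bin/bash", "-c",
            "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
        if err != nil {
            fmt.Println("apiserver not running:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out))) // e.g. -16
    }
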
	I0210 11:28:54.103917  147603 kubeadm.go:597] duration metric: took 9.404583915s to restartPrimaryControlPlane
	I0210 11:28:54.103926  147603 kubeadm.go:394] duration metric: took 9.4500285s to StartCluster
	I0210 11:28:54.103948  147603 settings.go:142] acquiring lock: {Name:mk1369a4cca9eaf53282144d4cb555c048db8e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:28:54.104029  147603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:28:54.104882  147603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/kubeconfig: {Name:mk38b84c4ae8f3ad09ecb56633115faef0fe39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:28:54.105165  147603 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 11:28:54.105207  147603 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 11:28:54.105315  147603 addons.go:69] Setting storage-provisioner=true in profile "test-preload-971370"
	I0210 11:28:54.105331  147603 addons.go:69] Setting default-storageclass=true in profile "test-preload-971370"
	I0210 11:28:54.105376  147603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-971370"
	I0210 11:28:54.105400  147603 config.go:182] Loaded profile config "test-preload-971370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0210 11:28:54.105339  147603 addons.go:238] Setting addon storage-provisioner=true in "test-preload-971370"
	W0210 11:28:54.105460  147603 addons.go:247] addon storage-provisioner should already be in state true
	I0210 11:28:54.105493  147603 host.go:66] Checking if "test-preload-971370" exists ...
	I0210 11:28:54.105783  147603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:28:54.105834  147603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:28:54.105900  147603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:28:54.105947  147603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:28:54.106833  147603 out.go:177] * Verifying Kubernetes components...
	I0210 11:28:54.108086  147603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:28:54.121527  147603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0210 11:28:54.121599  147603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37059
	I0210 11:28:54.122041  147603 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:28:54.122082  147603 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:28:54.122612  147603 main.go:141] libmachine: Using API Version  1
	I0210 11:28:54.122638  147603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:28:54.122716  147603 main.go:141] libmachine: Using API Version  1
	I0210 11:28:54.122741  147603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:28:54.123069  147603 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:28:54.123123  147603 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:28:54.123296  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetState
	I0210 11:28:54.123760  147603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:28:54.123820  147603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:28:54.125777  147603 kapi.go:59] client config for test-preload-971370: &rest.Config{Host:"https://192.168.39.60:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/client.crt", KeyFile:"/home/jenkins/minikube-integration/20385-109271/.minikube/profiles/test-preload-971370/client.key", CAFile:"/home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24db320), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 11:28:54.126169  147603 addons.go:238] Setting addon default-storageclass=true in "test-preload-971370"
	W0210 11:28:54.126191  147603 addons.go:247] addon default-storageclass should already be in state true
	I0210 11:28:54.126220  147603 host.go:66] Checking if "test-preload-971370" exists ...
	I0210 11:28:54.126596  147603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:28:54.126639  147603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:28:54.138208  147603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
	I0210 11:28:54.138592  147603 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:28:54.139113  147603 main.go:141] libmachine: Using API Version  1
	I0210 11:28:54.139144  147603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:28:54.139444  147603 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:28:54.139670  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetState
	I0210 11:28:54.141199  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	I0210 11:28:54.142835  147603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:28:54.143542  147603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38935
	I0210 11:28:54.143940  147603 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:28:54.144202  147603 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:28:54.144221  147603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 11:28:54.144240  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:54.144793  147603 main.go:141] libmachine: Using API Version  1
	I0210 11:28:54.144814  147603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:28:54.145273  147603 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:28:54.145850  147603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:28:54.145897  147603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:28:54.147274  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:54.147642  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:54.147673  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:54.147780  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHPort
	I0210 11:28:54.147963  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:54.148126  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHUsername
	I0210 11:28:54.148266  147603 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/test-preload-971370/id_rsa Username:docker}
	I0210 11:28:54.182513  147603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40001
	I0210 11:28:54.182998  147603 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:28:54.183600  147603 main.go:141] libmachine: Using API Version  1
	I0210 11:28:54.183627  147603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:28:54.183979  147603 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:28:54.184193  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetState
	I0210 11:28:54.185745  147603 main.go:141] libmachine: (test-preload-971370) Calling .DriverName
	I0210 11:28:54.185998  147603 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 11:28:54.186017  147603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 11:28:54.186037  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHHostname
	I0210 11:28:54.188426  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:54.188765  147603 main.go:141] libmachine: (test-preload-971370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:67:e5", ip: ""} in network mk-test-preload-971370: {Iface:virbr1 ExpiryTime:2025-02-10 12:28:23 +0000 UTC Type:0 Mac:52:54:00:ca:67:e5 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-971370 Clientid:01:52:54:00:ca:67:e5}
	I0210 11:28:54.188807  147603 main.go:141] libmachine: (test-preload-971370) DBG | domain test-preload-971370 has defined IP address 192.168.39.60 and MAC address 52:54:00:ca:67:e5 in network mk-test-preload-971370
	I0210 11:28:54.188914  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHPort
	I0210 11:28:54.189113  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHKeyPath
	I0210 11:28:54.189264  147603 main.go:141] libmachine: (test-preload-971370) Calling .GetSSHUsername
	I0210 11:28:54.189405  147603 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/test-preload-971370/id_rsa Username:docker}
	I0210 11:28:54.264631  147603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:28:54.281494  147603 node_ready.go:35] waiting up to 6m0s for node "test-preload-971370" to be "Ready" ...
	I0210 11:28:54.344646  147603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:28:54.372876  147603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
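
The two apply commands above install the addon manifests with the kubectl binary that matches the cluster version (v1.24.4) and the kubeconfig inside the VM, rather than the host's kubectl, which sidesteps the version-skew warning printed at the end of this log. A sketch of that apply step, using the same paths as the logged commands.

    // Apply an addon manifest with the cluster-matched kubectl and the
    // in-VM kubeconfig, mirroring the two apply commands above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.24.4/kubectl",
            "apply", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
        }
        return nil
    }

    func main() {
        for _, m := range []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        } {
            if err := applyAddon(m); err != nil {
                fmt.Println(err)
            }
        }
    }
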
	I0210 11:28:55.327408  147603 main.go:141] libmachine: Making call to close driver server
	I0210 11:28:55.327438  147603 main.go:141] libmachine: (test-preload-971370) Calling .Close
	I0210 11:28:55.327445  147603 main.go:141] libmachine: Making call to close driver server
	I0210 11:28:55.327456  147603 main.go:141] libmachine: (test-preload-971370) Calling .Close
	I0210 11:28:55.327768  147603 main.go:141] libmachine: (test-preload-971370) DBG | Closing plugin on server side
	I0210 11:28:55.327827  147603 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:28:55.327840  147603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:28:55.327841  147603 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:28:55.327851  147603 main.go:141] libmachine: Making call to close driver server
	I0210 11:28:55.327855  147603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:28:55.327859  147603 main.go:141] libmachine: (test-preload-971370) Calling .Close
	I0210 11:28:55.327874  147603 main.go:141] libmachine: (test-preload-971370) DBG | Closing plugin on server side
	I0210 11:28:55.327863  147603 main.go:141] libmachine: Making call to close driver server
	I0210 11:28:55.327926  147603 main.go:141] libmachine: (test-preload-971370) Calling .Close
	I0210 11:28:55.328089  147603 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:28:55.328113  147603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:28:55.328218  147603 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:28:55.328236  147603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:28:55.334427  147603 main.go:141] libmachine: Making call to close driver server
	I0210 11:28:55.334451  147603 main.go:141] libmachine: (test-preload-971370) Calling .Close
	I0210 11:28:55.334678  147603 main.go:141] libmachine: (test-preload-971370) DBG | Closing plugin on server side
	I0210 11:28:55.334707  147603 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:28:55.334716  147603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:28:55.336364  147603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0210 11:28:55.337456  147603 addons.go:514] duration metric: took 1.232259485s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0210 11:28:56.284808  147603 node_ready.go:53] node "test-preload-971370" has status "Ready":"False"
	I0210 11:28:58.285268  147603 node_ready.go:53] node "test-preload-971370" has status "Ready":"False"
	I0210 11:29:00.785451  147603 node_ready.go:53] node "test-preload-971370" has status "Ready":"False"
	I0210 11:29:01.284727  147603 node_ready.go:49] node "test-preload-971370" has status "Ready":"True"
	I0210 11:29:01.284756  147603 node_ready.go:38] duration metric: took 7.003229093s for node "test-preload-971370" to be "Ready" ...
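
Node readiness above is polled until the node's Ready condition turns True, which takes about seven seconds here. A hedged client-go sketch of the same wait, assuming the host kubeconfig path from this log; the poll interval is illustrative.

    // Wait for a node's Ready condition to become True, like the
    // node_ready poll in the log. Requires k8s.io/client-go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20385-109271/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            ok, err := nodeReady(cs, "test-preload-971370")
            if err == nil && ok {
                fmt.Println("node Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
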
	I0210 11:29:01.284769  147603 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:29:01.287856  147603 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-h5bpg" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:01.293182  147603 pod_ready.go:93] pod "coredns-6d4b75cb6d-h5bpg" in "kube-system" namespace has status "Ready":"True"
	I0210 11:29:01.293201  147603 pod_ready.go:82] duration metric: took 5.319526ms for pod "coredns-6d4b75cb6d-h5bpg" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:01.293212  147603 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:03.298879  147603 pod_ready.go:103] pod "etcd-test-preload-971370" in "kube-system" namespace has status "Ready":"False"
	I0210 11:29:05.302088  147603 pod_ready.go:103] pod "etcd-test-preload-971370" in "kube-system" namespace has status "Ready":"False"
	I0210 11:29:05.798215  147603 pod_ready.go:93] pod "etcd-test-preload-971370" in "kube-system" namespace has status "Ready":"True"
	I0210 11:29:05.798241  147603 pod_ready.go:82] duration metric: took 4.505022335s for pod "etcd-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:05.798252  147603 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:05.802138  147603 pod_ready.go:93] pod "kube-apiserver-test-preload-971370" in "kube-system" namespace has status "Ready":"True"
	I0210 11:29:05.802156  147603 pod_ready.go:82] duration metric: took 3.897449ms for pod "kube-apiserver-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:05.802165  147603 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:05.805417  147603 pod_ready.go:93] pod "kube-controller-manager-test-preload-971370" in "kube-system" namespace has status "Ready":"True"
	I0210 11:29:05.805448  147603 pod_ready.go:82] duration metric: took 3.275211ms for pod "kube-controller-manager-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:05.805465  147603 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s595q" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:05.808597  147603 pod_ready.go:93] pod "kube-proxy-s595q" in "kube-system" namespace has status "Ready":"True"
	I0210 11:29:05.808620  147603 pod_ready.go:82] duration metric: took 3.136786ms for pod "kube-proxy-s595q" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:05.808632  147603 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:05.811556  147603 pod_ready.go:93] pod "kube-scheduler-test-preload-971370" in "kube-system" namespace has status "Ready":"True"
	I0210 11:29:05.811572  147603 pod_ready.go:82] duration metric: took 2.930876ms for pod "kube-scheduler-test-preload-971370" in "kube-system" namespace to be "Ready" ...
	I0210 11:29:05.811596  147603 pod_ready.go:39] duration metric: took 4.526798287s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:29:05.811614  147603 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:29:05.811663  147603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:29:05.825529  147603 api_server.go:72] duration metric: took 11.720326248s to wait for apiserver process to appear ...
	I0210 11:29:05.825552  147603 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:29:05.825574  147603 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0210 11:29:05.830431  147603 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0210 11:29:05.831153  147603 api_server.go:141] control plane version: v1.24.4
	I0210 11:29:05.831168  147603 api_server.go:131] duration metric: took 5.609081ms to wait for apiserver health ...
	I0210 11:29:05.831174  147603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:29:05.999584  147603 system_pods.go:59] 7 kube-system pods found
	I0210 11:29:05.999616  147603 system_pods.go:61] "coredns-6d4b75cb6d-h5bpg" [beae22b7-5816-45ce-a8a7-94ce57abcbe8] Running
	I0210 11:29:05.999621  147603 system_pods.go:61] "etcd-test-preload-971370" [02f81e22-aeac-408a-b9cd-8b2a9b307f5a] Running
	I0210 11:29:05.999625  147603 system_pods.go:61] "kube-apiserver-test-preload-971370" [524f41d1-14e3-44b4-b222-6a0babf482df] Running
	I0210 11:29:05.999628  147603 system_pods.go:61] "kube-controller-manager-test-preload-971370" [a4e54d5e-6404-4e47-a9d4-6fdc0eaed490] Running
	I0210 11:29:05.999632  147603 system_pods.go:61] "kube-proxy-s595q" [52a93f6c-4fef-4563-8aff-387c04515457] Running
	I0210 11:29:05.999635  147603 system_pods.go:61] "kube-scheduler-test-preload-971370" [16b938a5-d89e-4508-a36d-2cd201a36ce4] Running
	I0210 11:29:05.999642  147603 system_pods.go:61] "storage-provisioner" [baa22e7e-9325-461d-9bfb-0bce77dd1108] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 11:29:05.999650  147603 system_pods.go:74] duration metric: took 168.469546ms to wait for pod list to return data ...
	I0210 11:29:05.999661  147603 default_sa.go:34] waiting for default service account to be created ...
	I0210 11:29:06.197021  147603 default_sa.go:45] found service account: "default"
	I0210 11:29:06.197050  147603 default_sa.go:55] duration metric: took 197.382981ms for default service account to be created ...
	I0210 11:29:06.197065  147603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 11:29:06.398447  147603 system_pods.go:86] 7 kube-system pods found
	I0210 11:29:06.398481  147603 system_pods.go:89] "coredns-6d4b75cb6d-h5bpg" [beae22b7-5816-45ce-a8a7-94ce57abcbe8] Running
	I0210 11:29:06.398488  147603 system_pods.go:89] "etcd-test-preload-971370" [02f81e22-aeac-408a-b9cd-8b2a9b307f5a] Running
	I0210 11:29:06.398491  147603 system_pods.go:89] "kube-apiserver-test-preload-971370" [524f41d1-14e3-44b4-b222-6a0babf482df] Running
	I0210 11:29:06.398495  147603 system_pods.go:89] "kube-controller-manager-test-preload-971370" [a4e54d5e-6404-4e47-a9d4-6fdc0eaed490] Running
	I0210 11:29:06.398498  147603 system_pods.go:89] "kube-proxy-s595q" [52a93f6c-4fef-4563-8aff-387c04515457] Running
	I0210 11:29:06.398502  147603 system_pods.go:89] "kube-scheduler-test-preload-971370" [16b938a5-d89e-4508-a36d-2cd201a36ce4] Running
	I0210 11:29:06.398507  147603 system_pods.go:89] "storage-provisioner" [baa22e7e-9325-461d-9bfb-0bce77dd1108] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 11:29:06.398516  147603 system_pods.go:126] duration metric: took 201.445087ms to wait for k8s-apps to be running ...
	I0210 11:29:06.398530  147603 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 11:29:06.398579  147603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:29:06.412411  147603 system_svc.go:56] duration metric: took 13.869192ms WaitForService to wait for kubelet
	I0210 11:29:06.412448  147603 kubeadm.go:582] duration metric: took 12.30724555s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:29:06.412474  147603 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:29:06.596530  147603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:29:06.596556  147603 node_conditions.go:123] node cpu capacity is 2
	I0210 11:29:06.596568  147603 node_conditions.go:105] duration metric: took 184.089899ms to run NodePressure ...
	I0210 11:29:06.596579  147603 start.go:241] waiting for startup goroutines ...
	I0210 11:29:06.596585  147603 start.go:246] waiting for cluster config update ...
	I0210 11:29:06.596594  147603 start.go:255] writing updated cluster config ...
	I0210 11:29:06.596854  147603 ssh_runner.go:195] Run: rm -f paused
	I0210 11:29:06.643128  147603 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0210 11:29:06.644990  147603 out.go:201] 
	W0210 11:29:06.646224  147603 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0210 11:29:06.647503  147603 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0210 11:29:06.648741  147603 out.go:177] * Done! kubectl is now configured to use "test-preload-971370" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.512177678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739186947512155526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=732b5efc-629e-42be-a485-ab2c051526c7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.512551278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78cd24df-7633-49ec-abdf-533d43ce0eee name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.512612569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78cd24df-7633-49ec-abdf-533d43ce0eee name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.512851044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a943e8d66c08fe314add05d0b391b38c0b0ba5771b883f6b117d578971b1f44,PodSandboxId:4c23c96700881d052ea6b433620972e7cc513dfb02c124b62e211fa779397034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739186946891749494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baa22e7e-9325-461d-9bfb-0bce77dd1108,},Annotations:map[string]string{io.kubernetes.container.hash: a403f14e,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ea974631941f1267a657708744ea89dc1e17cdc2052ce845bc37326636c414,PodSandboxId:222038da1da116be3a10e3dd89679a748fbf790b50aa0189a2f572376eafc37e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739186938875857985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-h5bpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beae22b7-5816-45ce-a8a7-94ce57abcbe8,},Annotations:map[string]string{io.kubernetes.container.hash: 51ad752c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107fe86d8c903cc846cd0d9e05a28b2927288a6ab70497306d08455d8ed56404,PodSandboxId:4c23c96700881d052ea6b433620972e7cc513dfb02c124b62e211fa779397034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739186932922628325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: baa22e7e-9325-461d-9bfb-0bce77dd1108,},Annotations:map[string]string{io.kubernetes.container.hash: a403f14e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cb1cc3bf897d617c9a1d6d33319bf9c4389b4b980f23fd4a03df79743043e3,PodSandboxId:08ec41afd5f52ece808d3be07fde17a5ac9e4c95f1df85c2d9b11fdafebf079d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739186932702355646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s595q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a93f6c-4fef-4
563-8aff-387c04515457,},Annotations:map[string]string{io.kubernetes.container.hash: 30799831,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ee8af850113e55f1038d8aa1c19055c54c0c42242eb31fc5a4f800ecec8544,PodSandboxId:b2e75d1909eeaf64c28dbba463fb7c71b3512ab6c2bf9504280f457f4ceab2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739186926561272944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8d41f77f1bac8c559373
1b5e515893,},Annotations:map[string]string{io.kubernetes.container.hash: 9e45a7bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072d6e2ad42e25f81f70150c0a3203e3e406f9f863f2719a25866f3188e77033,PodSandboxId:c54af53c1be912c53fd7f51ccf456814c4e1f90e4352687016258080eab61efa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739186926530977621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 398a8a9b3cdf0f57b7987fcc6ebc917b,
},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad8b1666802b256526304b04ed55585fdcb811e1f01b273c6ba703f83d6173bf,PodSandboxId:f60c2ed8d2e0457a9f6bccf315c466778fde95ec66d39a2c508a948ba2c7ec8f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739186926483343290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2bfedd54d9ec4b2b1592c048a0f817,},Annotations:map[string]string{io.kubern
etes.container.hash: 47c751d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9db4c6b8ffbccfc537e8df2afa0df425db1dd305e9c767c5447442f529d3e2,PodSandboxId:43b6d02c290eee0cf796a1eeb4be507b6103772b2a959e8ef55b458e5ee4c3b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739186926474641078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3048ddebc5bb2e1fd82792a3ddadca7,},Annotations:map[string]
string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78cd24df-7633-49ec-abdf-533d43ce0eee name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.546250439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8fa5b4b-d51b-40ec-999c-32b5d25398eb name=/runtime.v1.RuntimeService/Version
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.546334633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8fa5b4b-d51b-40ec-999c-32b5d25398eb name=/runtime.v1.RuntimeService/Version
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.547289931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b66db4d-8349-48e9-aa17-151a9cb4e23b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.547868197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739186947547821360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b66db4d-8349-48e9-aa17-151a9cb4e23b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.548389165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bca53d97-da73-4629-af1f-433a099ff1e2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.548450574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bca53d97-da73-4629-af1f-433a099ff1e2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.548846624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a943e8d66c08fe314add05d0b391b38c0b0ba5771b883f6b117d578971b1f44,PodSandboxId:4c23c96700881d052ea6b433620972e7cc513dfb02c124b62e211fa779397034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739186946891749494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baa22e7e-9325-461d-9bfb-0bce77dd1108,},Annotations:map[string]string{io.kubernetes.container.hash: a403f14e,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ea974631941f1267a657708744ea89dc1e17cdc2052ce845bc37326636c414,PodSandboxId:222038da1da116be3a10e3dd89679a748fbf790b50aa0189a2f572376eafc37e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739186938875857985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-h5bpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beae22b7-5816-45ce-a8a7-94ce57abcbe8,},Annotations:map[string]string{io.kubernetes.container.hash: 51ad752c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107fe86d8c903cc846cd0d9e05a28b2927288a6ab70497306d08455d8ed56404,PodSandboxId:4c23c96700881d052ea6b433620972e7cc513dfb02c124b62e211fa779397034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739186932922628325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: baa22e7e-9325-461d-9bfb-0bce77dd1108,},Annotations:map[string]string{io.kubernetes.container.hash: a403f14e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cb1cc3bf897d617c9a1d6d33319bf9c4389b4b980f23fd4a03df79743043e3,PodSandboxId:08ec41afd5f52ece808d3be07fde17a5ac9e4c95f1df85c2d9b11fdafebf079d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739186932702355646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s595q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a93f6c-4fef-4
563-8aff-387c04515457,},Annotations:map[string]string{io.kubernetes.container.hash: 30799831,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ee8af850113e55f1038d8aa1c19055c54c0c42242eb31fc5a4f800ecec8544,PodSandboxId:b2e75d1909eeaf64c28dbba463fb7c71b3512ab6c2bf9504280f457f4ceab2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739186926561272944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8d41f77f1bac8c559373
1b5e515893,},Annotations:map[string]string{io.kubernetes.container.hash: 9e45a7bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072d6e2ad42e25f81f70150c0a3203e3e406f9f863f2719a25866f3188e77033,PodSandboxId:c54af53c1be912c53fd7f51ccf456814c4e1f90e4352687016258080eab61efa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739186926530977621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 398a8a9b3cdf0f57b7987fcc6ebc917b,
},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad8b1666802b256526304b04ed55585fdcb811e1f01b273c6ba703f83d6173bf,PodSandboxId:f60c2ed8d2e0457a9f6bccf315c466778fde95ec66d39a2c508a948ba2c7ec8f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739186926483343290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2bfedd54d9ec4b2b1592c048a0f817,},Annotations:map[string]string{io.kubern
etes.container.hash: 47c751d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9db4c6b8ffbccfc537e8df2afa0df425db1dd305e9c767c5447442f529d3e2,PodSandboxId:43b6d02c290eee0cf796a1eeb4be507b6103772b2a959e8ef55b458e5ee4c3b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739186926474641078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3048ddebc5bb2e1fd82792a3ddadca7,},Annotations:map[string]
string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bca53d97-da73-4629-af1f-433a099ff1e2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.581842065Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9c64416-e644-4128-9da7-1cb55e172bef name=/runtime.v1.RuntimeService/Version
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.581918212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9c64416-e644-4128-9da7-1cb55e172bef name=/runtime.v1.RuntimeService/Version
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.583088985Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0ef0d69-8fdb-4f55-aec8-ce3b2dbd2087 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.583512150Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739186947583491989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0ef0d69-8fdb-4f55-aec8-ce3b2dbd2087 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.584100109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a623c62-c359-4692-8801-1fb90a25d742 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.584166207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a623c62-c359-4692-8801-1fb90a25d742 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.584332654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a943e8d66c08fe314add05d0b391b38c0b0ba5771b883f6b117d578971b1f44,PodSandboxId:4c23c96700881d052ea6b433620972e7cc513dfb02c124b62e211fa779397034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739186946891749494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baa22e7e-9325-461d-9bfb-0bce77dd1108,},Annotations:map[string]string{io.kubernetes.container.hash: a403f14e,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ea974631941f1267a657708744ea89dc1e17cdc2052ce845bc37326636c414,PodSandboxId:222038da1da116be3a10e3dd89679a748fbf790b50aa0189a2f572376eafc37e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739186938875857985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-h5bpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beae22b7-5816-45ce-a8a7-94ce57abcbe8,},Annotations:map[string]string{io.kubernetes.container.hash: 51ad752c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107fe86d8c903cc846cd0d9e05a28b2927288a6ab70497306d08455d8ed56404,PodSandboxId:4c23c96700881d052ea6b433620972e7cc513dfb02c124b62e211fa779397034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739186932922628325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: baa22e7e-9325-461d-9bfb-0bce77dd1108,},Annotations:map[string]string{io.kubernetes.container.hash: a403f14e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cb1cc3bf897d617c9a1d6d33319bf9c4389b4b980f23fd4a03df79743043e3,PodSandboxId:08ec41afd5f52ece808d3be07fde17a5ac9e4c95f1df85c2d9b11fdafebf079d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739186932702355646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s595q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a93f6c-4fef-4
563-8aff-387c04515457,},Annotations:map[string]string{io.kubernetes.container.hash: 30799831,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ee8af850113e55f1038d8aa1c19055c54c0c42242eb31fc5a4f800ecec8544,PodSandboxId:b2e75d1909eeaf64c28dbba463fb7c71b3512ab6c2bf9504280f457f4ceab2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739186926561272944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8d41f77f1bac8c559373
1b5e515893,},Annotations:map[string]string{io.kubernetes.container.hash: 9e45a7bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072d6e2ad42e25f81f70150c0a3203e3e406f9f863f2719a25866f3188e77033,PodSandboxId:c54af53c1be912c53fd7f51ccf456814c4e1f90e4352687016258080eab61efa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739186926530977621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 398a8a9b3cdf0f57b7987fcc6ebc917b,
},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad8b1666802b256526304b04ed55585fdcb811e1f01b273c6ba703f83d6173bf,PodSandboxId:f60c2ed8d2e0457a9f6bccf315c466778fde95ec66d39a2c508a948ba2c7ec8f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739186926483343290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2bfedd54d9ec4b2b1592c048a0f817,},Annotations:map[string]string{io.kubern
etes.container.hash: 47c751d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9db4c6b8ffbccfc537e8df2afa0df425db1dd305e9c767c5447442f529d3e2,PodSandboxId:43b6d02c290eee0cf796a1eeb4be507b6103772b2a959e8ef55b458e5ee4c3b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739186926474641078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3048ddebc5bb2e1fd82792a3ddadca7,},Annotations:map[string]
string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a623c62-c359-4692-8801-1fb90a25d742 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.613407904Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2683981-aa3a-4f20-8114-4b0c95dddda6 name=/runtime.v1.RuntimeService/Version
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.613514293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2683981-aa3a-4f20-8114-4b0c95dddda6 name=/runtime.v1.RuntimeService/Version
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.614789956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3593033-1cf1-4358-8ef6-682ab39d2d87 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.615271763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739186947615250133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3593033-1cf1-4358-8ef6-682ab39d2d87 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.615752983Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1d10923-ba23-4dfe-9c7d-76096cbe262f name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.615815459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1d10923-ba23-4dfe-9c7d-76096cbe262f name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:29:07 test-preload-971370 crio[675]: time="2025-02-10 11:29:07.616027714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a943e8d66c08fe314add05d0b391b38c0b0ba5771b883f6b117d578971b1f44,PodSandboxId:4c23c96700881d052ea6b433620972e7cc513dfb02c124b62e211fa779397034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739186946891749494,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baa22e7e-9325-461d-9bfb-0bce77dd1108,},Annotations:map[string]string{io.kubernetes.container.hash: a403f14e,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ea974631941f1267a657708744ea89dc1e17cdc2052ce845bc37326636c414,PodSandboxId:222038da1da116be3a10e3dd89679a748fbf790b50aa0189a2f572376eafc37e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739186938875857985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-h5bpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beae22b7-5816-45ce-a8a7-94ce57abcbe8,},Annotations:map[string]string{io.kubernetes.container.hash: 51ad752c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107fe86d8c903cc846cd0d9e05a28b2927288a6ab70497306d08455d8ed56404,PodSandboxId:4c23c96700881d052ea6b433620972e7cc513dfb02c124b62e211fa779397034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739186932922628325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: baa22e7e-9325-461d-9bfb-0bce77dd1108,},Annotations:map[string]string{io.kubernetes.container.hash: a403f14e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cb1cc3bf897d617c9a1d6d33319bf9c4389b4b980f23fd4a03df79743043e3,PodSandboxId:08ec41afd5f52ece808d3be07fde17a5ac9e4c95f1df85c2d9b11fdafebf079d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739186932702355646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s595q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a93f6c-4fef-4
563-8aff-387c04515457,},Annotations:map[string]string{io.kubernetes.container.hash: 30799831,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ee8af850113e55f1038d8aa1c19055c54c0c42242eb31fc5a4f800ecec8544,PodSandboxId:b2e75d1909eeaf64c28dbba463fb7c71b3512ab6c2bf9504280f457f4ceab2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739186926561272944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8d41f77f1bac8c559373
1b5e515893,},Annotations:map[string]string{io.kubernetes.container.hash: 9e45a7bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072d6e2ad42e25f81f70150c0a3203e3e406f9f863f2719a25866f3188e77033,PodSandboxId:c54af53c1be912c53fd7f51ccf456814c4e1f90e4352687016258080eab61efa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739186926530977621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 398a8a9b3cdf0f57b7987fcc6ebc917b,
},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad8b1666802b256526304b04ed55585fdcb811e1f01b273c6ba703f83d6173bf,PodSandboxId:f60c2ed8d2e0457a9f6bccf315c466778fde95ec66d39a2c508a948ba2c7ec8f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739186926483343290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2bfedd54d9ec4b2b1592c048a0f817,},Annotations:map[string]string{io.kubern
etes.container.hash: 47c751d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9db4c6b8ffbccfc537e8df2afa0df425db1dd305e9c767c5447442f529d3e2,PodSandboxId:43b6d02c290eee0cf796a1eeb4be507b6103772b2a959e8ef55b458e5ee4c3b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739186926474641078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-971370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3048ddebc5bb2e1fd82792a3ddadca7,},Annotations:map[string]
string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1d10923-ba23-4dfe-9c7d-76096cbe262f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	5a943e8d66c08       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   Less than a second ago   Running             storage-provisioner       3                   4c23c96700881       storage-provisioner
	c8ea974631941       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago            Running             coredns                   1                   222038da1da11       coredns-6d4b75cb6d-h5bpg
	107fe86d8c903       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago           Exited              storage-provisioner       2                   4c23c96700881       storage-provisioner
	20cb1cc3bf897       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago           Running             kube-proxy                1                   08ec41afd5f52       kube-proxy-s595q
	31ee8af850113       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago           Running             kube-apiserver            1                   b2e75d1909eea       kube-apiserver-test-preload-971370
	072d6e2ad42e2       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago           Running             kube-scheduler            1                   c54af53c1be91       kube-scheduler-test-preload-971370
	ad8b1666802b2       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago           Running             etcd                      1                   f60c2ed8d2e04       etcd-test-preload-971370
	2e9db4c6b8ffb       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago           Running             kube-controller-manager   1                   43b6d02c290ee       kube-controller-manager-test-preload-971370
	
	
	==> coredns [c8ea974631941f1267a657708744ea89dc1e17cdc2052ce845bc37326636c414] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:50565 - 38865 "HINFO IN 3553973301061955252.8868007353743411125. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008260621s
	
	
	==> describe nodes <==
	Name:               test-preload-971370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-971370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=test-preload-971370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T11_27_31_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 11:27:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-971370
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 11:29:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 11:29:01 +0000   Mon, 10 Feb 2025 11:27:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 11:29:01 +0000   Mon, 10 Feb 2025 11:27:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 11:29:01 +0000   Mon, 10 Feb 2025 11:27:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 11:29:01 +0000   Mon, 10 Feb 2025 11:29:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    test-preload-971370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c03c7236fad3465ebf4fbfbb1204ba5d
	  System UUID:                c03c7236-fad3-465e-bf4f-bfbb1204ba5d
	  Boot ID:                    da132a03-dafa-47a7-94ac-7cccdf866277
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-h5bpg                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     83s
	  kube-system                 etcd-test-preload-971370                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         97s
	  kube-system                 kube-apiserver-test-preload-971370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-test-preload-971370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-s595q                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-test-preload-971370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 81s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  104s (x4 over 104s)  kubelet          Node test-preload-971370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x4 over 104s)  kubelet          Node test-preload-971370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x4 over 104s)  kubelet          Node test-preload-971370 status is now: NodeHasSufficientPID
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node test-preload-971370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node test-preload-971370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node test-preload-971370 status is now: NodeHasSufficientPID
	  Normal  NodeReady                86s                  kubelet          Node test-preload-971370 status is now: NodeReady
	  Normal  RegisteredNode           83s                  node-controller  Node test-preload-971370 event: Registered Node test-preload-971370 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-971370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-971370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-971370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-971370 event: Registered Node test-preload-971370 in Controller
	
	
	==> dmesg <==
	[Feb10 11:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052126] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037131] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.832084] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.914762] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.595808] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.169190] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.055821] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054708] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.167597] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.111927] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.243676] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[ +12.968158] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.063697] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.411495] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +7.061368] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.526173] systemd-fstab-generator[1838]: Ignoring "noauto" option for root device
	[  +4.540926] kauditd_printk_skb: 59 callbacks suppressed
	[Feb10 11:29] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [ad8b1666802b256526304b04ed55585fdcb811e1f01b273c6ba703f83d6173bf] <==
	{"level":"info","ts":"2025-02-10T11:28:46.790Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"1a622f206f99396a","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-02-10T11:28:46.790Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-02-10T11:28:46.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a switched to configuration voters=(1901133809061542250)"}
	{"level":"info","ts":"2025-02-10T11:28:46.792Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","added-peer-id":"1a622f206f99396a","added-peer-peer-urls":["https://192.168.39.60:2380"]}
	{"level":"info","ts":"2025-02-10T11:28:46.792Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T11:28:46.792Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T11:28:46.795Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-10T11:28:46.795Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1a622f206f99396a","initial-advertise-peer-urls":["https://192.168.39.60:2380"],"listen-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.60:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-10T11:28:46.795Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-10T11:28:46.795Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2025-02-10T11:28:46.795Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2025-02-10T11:28:48.368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-10T11:28:48.368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-10T11:28:48.368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgPreVoteResp from 1a622f206f99396a at term 2"}
	{"level":"info","ts":"2025-02-10T11:28:48.368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became candidate at term 3"}
	{"level":"info","ts":"2025-02-10T11:28:48.368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgVoteResp from 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2025-02-10T11:28:48.368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became leader at term 3"}
	{"level":"info","ts":"2025-02-10T11:28:48.368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a622f206f99396a elected leader 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2025-02-10T11:28:48.373Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"1a622f206f99396a","local-member-attributes":"{Name:test-preload-971370 ClientURLs:[https://192.168.39.60:2379]}","request-path":"/0/members/1a622f206f99396a/attributes","cluster-id":"94dd135126e1e7b0","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-10T11:28:48.373Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T11:28:48.373Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T11:28:48.375Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.60:2379"}
	{"level":"info","ts":"2025-02-10T11:28:48.375Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-10T11:28:48.375Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-10T11:28:48.375Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:29:07 up 0 min,  0 users,  load average: 0.83, 0.26, 0.09
	Linux test-preload-971370 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [31ee8af850113e55f1038d8aa1c19055c54c0c42242eb31fc5a4f800ecec8544] <==
	I0210 11:28:50.664359       1 naming_controller.go:291] Starting NamingConditionController
	I0210 11:28:50.664554       1 establishing_controller.go:76] Starting EstablishingController
	I0210 11:28:50.664593       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0210 11:28:50.664623       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0210 11:28:50.664655       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0210 11:28:50.698276       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 11:28:50.714685       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 11:28:50.837508       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0210 11:28:50.846235       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 11:28:50.846885       1 cache.go:39] Caches are synced for autoregister controller
	I0210 11:28:50.849242       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0210 11:28:50.850112       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0210 11:28:50.850805       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0210 11:28:50.851312       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 11:28:50.860592       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0210 11:28:51.302802       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0210 11:28:51.651537       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 11:28:52.005799       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0210 11:28:52.017865       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0210 11:28:52.048809       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0210 11:28:52.064551       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 11:28:52.070226       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 11:28:53.138184       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0210 11:29:03.892212       1 controller.go:611] quota admission added evaluator for: endpoints
	I0210 11:29:03.908341       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2e9db4c6b8ffbccfc537e8df2afa0df425db1dd305e9c767c5447442f529d3e2] <==
	I0210 11:29:03.867152       1 range_allocator.go:173] Starting range CIDR allocator
	I0210 11:29:03.867169       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0210 11:29:03.867186       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0210 11:29:03.867239       1 shared_informer.go:262] Caches are synced for crt configmap
	I0210 11:29:03.869218       1 shared_informer.go:262] Caches are synced for endpoint
	I0210 11:29:03.871391       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0210 11:29:03.873921       1 shared_informer.go:262] Caches are synced for disruption
	I0210 11:29:03.873952       1 disruption.go:371] Sending events to api server.
	I0210 11:29:03.874017       1 shared_informer.go:262] Caches are synced for attach detach
	I0210 11:29:03.876160       1 shared_informer.go:262] Caches are synced for PVC protection
	I0210 11:29:03.876205       1 shared_informer.go:262] Caches are synced for TTL
	I0210 11:29:03.878947       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0210 11:29:03.957056       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0210 11:29:03.960410       1 shared_informer.go:262] Caches are synced for job
	I0210 11:29:03.965905       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0210 11:29:03.968187       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 11:29:03.969421       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 11:29:03.971745       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 11:29:03.971789       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 11:29:04.072789       1 shared_informer.go:262] Caches are synced for resource quota
	I0210 11:29:04.099958       1 shared_informer.go:262] Caches are synced for cronjob
	I0210 11:29:04.111341       1 shared_informer.go:262] Caches are synced for resource quota
	I0210 11:29:04.521168       1 shared_informer.go:262] Caches are synced for garbage collector
	I0210 11:29:04.548367       1 shared_informer.go:262] Caches are synced for garbage collector
	I0210 11:29:04.548454       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [20cb1cc3bf897d617c9a1d6d33319bf9c4389b4b980f23fd4a03df79743043e3] <==
	I0210 11:28:53.100147       1 node.go:163] Successfully retrieved node IP: 192.168.39.60
	I0210 11:28:53.100374       1 server_others.go:138] "Detected node IP" address="192.168.39.60"
	I0210 11:28:53.100450       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0210 11:28:53.127418       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0210 11:28:53.127493       1 server_others.go:206] "Using iptables Proxier"
	I0210 11:28:53.127767       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0210 11:28:53.128094       1 server.go:661] "Version info" version="v1.24.4"
	I0210 11:28:53.128192       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 11:28:53.129870       1 config.go:317] "Starting service config controller"
	I0210 11:28:53.130108       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0210 11:28:53.130156       1 config.go:226] "Starting endpoint slice config controller"
	I0210 11:28:53.130173       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0210 11:28:53.134829       1 config.go:444] "Starting node config controller"
	I0210 11:28:53.135606       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0210 11:28:53.230819       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0210 11:28:53.230914       1 shared_informer.go:262] Caches are synced for service config
	I0210 11:28:53.235755       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [072d6e2ad42e25f81f70150c0a3203e3e406f9f863f2719a25866f3188e77033] <==
	I0210 11:28:47.518003       1 serving.go:348] Generated self-signed cert in-memory
	W0210 11:28:50.720901       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0210 11:28:50.720967       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0210 11:28:50.720983       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0210 11:28:50.721031       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 11:28:50.789353       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0210 11:28:50.789400       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 11:28:50.796775       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 11:28:50.796824       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 11:28:50.801880       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0210 11:28:50.802181       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0210 11:28:50.898466       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 10 11:28:50 test-preload-971370 kubelet[1127]: I0210 11:28:50.821952    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmftn\" (UniqueName: \"kubernetes.io/projected/beae22b7-5816-45ce-a8a7-94ce57abcbe8-kube-api-access-gmftn\") pod \"coredns-6d4b75cb6d-h5bpg\" (UID: \"beae22b7-5816-45ce-a8a7-94ce57abcbe8\") " pod="kube-system/coredns-6d4b75cb6d-h5bpg"
	Feb 10 11:28:50 test-preload-971370 kubelet[1127]: I0210 11:28:50.822004    1127 reconciler.go:159] "Reconciler: start to sync state"
	Feb 10 11:28:50 test-preload-971370 kubelet[1127]: E0210 11:28:50.925641    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 11:28:50 test-preload-971370 kubelet[1127]: E0210 11:28:50.925864    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/beae22b7-5816-45ce-a8a7-94ce57abcbe8-config-volume podName:beae22b7-5816-45ce-a8a7-94ce57abcbe8 nodeName:}" failed. No retries permitted until 2025-02-10 11:28:51.425791172 +0000 UTC m=+5.777801170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/beae22b7-5816-45ce-a8a7-94ce57abcbe8-config-volume") pod "coredns-6d4b75cb6d-h5bpg" (UID: "beae22b7-5816-45ce-a8a7-94ce57abcbe8") : object "kube-system"/"coredns" not registered
	Feb 10 11:28:51 test-preload-971370 kubelet[1127]: I0210 11:28:51.134855    1127 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-971370"
	Feb 10 11:28:51 test-preload-971370 kubelet[1127]: I0210 11:28:51.134965    1127 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-971370"
	Feb 10 11:28:51 test-preload-971370 kubelet[1127]: I0210 11:28:51.137146    1127 setters.go:532] "Node became not ready" node="test-preload-971370" condition={Type:Ready Status:False LastHeartbeatTime:2025-02-10 11:28:51.137086496 +0000 UTC m=+5.489096479 LastTransitionTime:2025-02-10 11:28:51.137086496 +0000 UTC m=+5.489096479 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Feb 10 11:28:51 test-preload-971370 kubelet[1127]: E0210 11:28:51.428472    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 11:28:51 test-preload-971370 kubelet[1127]: E0210 11:28:51.428767    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/beae22b7-5816-45ce-a8a7-94ce57abcbe8-config-volume podName:beae22b7-5816-45ce-a8a7-94ce57abcbe8 nodeName:}" failed. No retries permitted until 2025-02-10 11:28:52.428686311 +0000 UTC m=+6.780696311 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/beae22b7-5816-45ce-a8a7-94ce57abcbe8-config-volume") pod "coredns-6d4b75cb6d-h5bpg" (UID: "beae22b7-5816-45ce-a8a7-94ce57abcbe8") : object "kube-system"/"coredns" not registered
	Feb 10 11:28:51 test-preload-971370 kubelet[1127]: I0210 11:28:51.875791    1127 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4b75e04e-2c37-4629-bb44-f6bcef416c96 path="/var/lib/kubelet/pods/4b75e04e-2c37-4629-bb44-f6bcef416c96/volumes"
	Feb 10 11:28:51 test-preload-971370 kubelet[1127]: E0210 11:28:51.926414    1127 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Feb 10 11:28:51 test-preload-971370 kubelet[1127]: E0210 11:28:51.926494    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/52a93f6c-4fef-4563-8aff-387c04515457-kube-proxy podName:52a93f6c-4fef-4563-8aff-387c04515457 nodeName:}" failed. No retries permitted until 2025-02-10 11:28:52.426475248 +0000 UTC m=+6.778485244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/52a93f6c-4fef-4563-8aff-387c04515457-kube-proxy") pod "kube-proxy-s595q" (UID: "52a93f6c-4fef-4563-8aff-387c04515457") : failed to sync configmap cache: timed out waiting for the condition
	Feb 10 11:28:52 test-preload-971370 kubelet[1127]: E0210 11:28:52.436071    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 11:28:52 test-preload-971370 kubelet[1127]: E0210 11:28:52.436156    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/beae22b7-5816-45ce-a8a7-94ce57abcbe8-config-volume podName:beae22b7-5816-45ce-a8a7-94ce57abcbe8 nodeName:}" failed. No retries permitted until 2025-02-10 11:28:54.436141727 +0000 UTC m=+8.788151712 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/beae22b7-5816-45ce-a8a7-94ce57abcbe8-config-volume") pod "coredns-6d4b75cb6d-h5bpg" (UID: "beae22b7-5816-45ce-a8a7-94ce57abcbe8") : object "kube-system"/"coredns" not registered
	Feb 10 11:28:52 test-preload-971370 kubelet[1127]: E0210 11:28:52.870044    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-h5bpg" podUID=beae22b7-5816-45ce-a8a7-94ce57abcbe8
	Feb 10 11:28:52 test-preload-971370 kubelet[1127]: I0210 11:28:52.906325    1127 scope.go:110] "RemoveContainer" containerID="3ec23cec6e50d45ca6247de0ad862342715e708e1dd5759e4c77ca2e376d39cb"
	Feb 10 11:28:53 test-preload-971370 kubelet[1127]: I0210 11:28:53.923390    1127 scope.go:110] "RemoveContainer" containerID="3ec23cec6e50d45ca6247de0ad862342715e708e1dd5759e4c77ca2e376d39cb"
	Feb 10 11:28:53 test-preload-971370 kubelet[1127]: I0210 11:28:53.923864    1127 scope.go:110] "RemoveContainer" containerID="107fe86d8c903cc846cd0d9e05a28b2927288a6ab70497306d08455d8ed56404"
	Feb 10 11:28:53 test-preload-971370 kubelet[1127]: E0210 11:28:53.924154    1127 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(baa22e7e-9325-461d-9bfb-0bce77dd1108)\"" pod="kube-system/storage-provisioner" podUID=baa22e7e-9325-461d-9bfb-0bce77dd1108
	Feb 10 11:28:54 test-preload-971370 kubelet[1127]: E0210 11:28:54.453052    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 11:28:54 test-preload-971370 kubelet[1127]: E0210 11:28:54.453129    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/beae22b7-5816-45ce-a8a7-94ce57abcbe8-config-volume podName:beae22b7-5816-45ce-a8a7-94ce57abcbe8 nodeName:}" failed. No retries permitted until 2025-02-10 11:28:58.453115193 +0000 UTC m=+12.805125177 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/beae22b7-5816-45ce-a8a7-94ce57abcbe8-config-volume") pod "coredns-6d4b75cb6d-h5bpg" (UID: "beae22b7-5816-45ce-a8a7-94ce57abcbe8") : object "kube-system"/"coredns" not registered
	Feb 10 11:28:54 test-preload-971370 kubelet[1127]: E0210 11:28:54.870058    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-h5bpg" podUID=beae22b7-5816-45ce-a8a7-94ce57abcbe8
	Feb 10 11:28:54 test-preload-971370 kubelet[1127]: I0210 11:28:54.928290    1127 scope.go:110] "RemoveContainer" containerID="107fe86d8c903cc846cd0d9e05a28b2927288a6ab70497306d08455d8ed56404"
	Feb 10 11:28:54 test-preload-971370 kubelet[1127]: E0210 11:28:54.928480    1127 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(baa22e7e-9325-461d-9bfb-0bce77dd1108)\"" pod="kube-system/storage-provisioner" podUID=baa22e7e-9325-461d-9bfb-0bce77dd1108
	Feb 10 11:29:06 test-preload-971370 kubelet[1127]: I0210 11:29:06.870807    1127 scope.go:110] "RemoveContainer" containerID="107fe86d8c903cc846cd0d9e05a28b2927288a6ab70497306d08455d8ed56404"
	
	
	==> storage-provisioner [107fe86d8c903cc846cd0d9e05a28b2927288a6ab70497306d08455d8ed56404] <==
	I0210 11:28:53.041189       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0210 11:28:53.043968       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [5a943e8d66c08fe314add05d0b391b38c0b0ba5771b883f6b117d578971b1f44] <==
	I0210 11:29:06.967153       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0210 11:29:06.985574       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0210 11:29:06.986240       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-971370 -n test-preload-971370
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-971370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-971370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-971370
--- FAIL: TestPreload (169.28s)

                                                
                                    
TestKubernetesUpgrade (412.03s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-557458 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-557458 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m32.819894261s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-557458] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-557458" primary control-plane node in "kubernetes-upgrade-557458" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 11:34:35.029123  154493 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:34:35.029267  154493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:34:35.029276  154493 out.go:358] Setting ErrFile to fd 2...
	I0210 11:34:35.029281  154493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:34:35.029507  154493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:34:35.030095  154493 out.go:352] Setting JSON to false
	I0210 11:34:35.031032  154493 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8217,"bootTime":1739179058,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 11:34:35.031133  154493 start.go:139] virtualization: kvm guest
	I0210 11:34:35.033083  154493 out.go:177] * [kubernetes-upgrade-557458] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 11:34:35.034229  154493 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:34:35.034246  154493 notify.go:220] Checking for updates...
	I0210 11:34:35.036491  154493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:34:35.037608  154493 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:34:35.038642  154493 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:34:35.039751  154493 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 11:34:35.040852  154493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:34:35.042265  154493 config.go:182] Loaded profile config "NoKubernetes-460172": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0210 11:34:35.042403  154493 config.go:182] Loaded profile config "cert-expiration-038969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:34:35.042500  154493 config.go:182] Loaded profile config "running-upgrade-593595": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0210 11:34:35.042618  154493 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:34:35.075735  154493 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 11:34:35.076739  154493 start.go:297] selected driver: kvm2
	I0210 11:34:35.076755  154493 start.go:901] validating driver "kvm2" against <nil>
	I0210 11:34:35.076769  154493 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:34:35.077521  154493 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:34:35.077620  154493 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-109271/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 11:34:35.095384  154493 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 11:34:35.095429  154493 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 11:34:35.095672  154493 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 11:34:35.095707  154493 cni.go:84] Creating CNI manager for ""
	I0210 11:34:35.095761  154493 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:34:35.095773  154493 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 11:34:35.095834  154493 start.go:340] cluster config:
	{Name:kubernetes-upgrade-557458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-557458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:34:35.095955  154493 iso.go:125] acquiring lock: {Name:mk479d49a84808a4b16be867aad83d1d3d802291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:34:35.097517  154493 out.go:177] * Starting "kubernetes-upgrade-557458" primary control-plane node in "kubernetes-upgrade-557458" cluster
	I0210 11:34:35.099039  154493 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 11:34:35.099079  154493 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 11:34:35.099087  154493 cache.go:56] Caching tarball of preloaded images
	I0210 11:34:35.099166  154493 preload.go:172] Found /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 11:34:35.099178  154493 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0210 11:34:35.099327  154493 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/config.json ...
	I0210 11:34:35.099358  154493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/config.json: {Name:mk02654cd02e4d37270ff30e424ebb1887a37677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:34:35.099530  154493 start.go:360] acquireMachinesLock for kubernetes-upgrade-557458: {Name:mke6c3a615c5915495f0682c0833d8830c2c1004 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:34:35.099576  154493 start.go:364] duration metric: took 23.932µs to acquireMachinesLock for "kubernetes-upgrade-557458"
	I0210 11:34:35.099601  154493 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-557458 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-557458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 11:34:35.099682  154493 start.go:125] createHost starting for "" (driver="kvm2")
	I0210 11:34:35.101139  154493 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 11:34:35.101315  154493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:34:35.101356  154493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:34:35.117569  154493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39449
	I0210 11:34:35.118108  154493 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:34:35.118723  154493 main.go:141] libmachine: Using API Version  1
	I0210 11:34:35.118755  154493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:34:35.119087  154493 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:34:35.119367  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetMachineName
	I0210 11:34:35.119528  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:34:35.119711  154493 start.go:159] libmachine.API.Create for "kubernetes-upgrade-557458" (driver="kvm2")
	I0210 11:34:35.119752  154493 client.go:168] LocalClient.Create starting
	I0210 11:34:35.119791  154493 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem
	I0210 11:34:35.119829  154493 main.go:141] libmachine: Decoding PEM data...
	I0210 11:34:35.119844  154493 main.go:141] libmachine: Parsing certificate...
	I0210 11:34:35.119910  154493 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem
	I0210 11:34:35.119943  154493 main.go:141] libmachine: Decoding PEM data...
	I0210 11:34:35.119967  154493 main.go:141] libmachine: Parsing certificate...
	I0210 11:34:35.119993  154493 main.go:141] libmachine: Running pre-create checks...
	I0210 11:34:35.120006  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .PreCreateCheck
	I0210 11:34:35.120374  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetConfigRaw
	I0210 11:34:35.120995  154493 main.go:141] libmachine: Creating machine...
	I0210 11:34:35.121016  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .Create
	I0210 11:34:35.121168  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) creating KVM machine...
	I0210 11:34:35.121186  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) creating network...
	I0210 11:34:35.122487  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found existing default KVM network
	I0210 11:34:35.124028  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:35.123851  154517 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4d:30:2e} reservation:<nil>}
	I0210 11:34:35.125672  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:35.125580  154517 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000316a40}
	I0210 11:34:35.125699  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | created network xml: 
	I0210 11:34:35.125712  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | <network>
	I0210 11:34:35.125725  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG |   <name>mk-kubernetes-upgrade-557458</name>
	I0210 11:34:35.125740  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG |   <dns enable='no'/>
	I0210 11:34:35.125751  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG |   
	I0210 11:34:35.125766  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0210 11:34:35.125788  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG |     <dhcp>
	I0210 11:34:35.125802  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0210 11:34:35.125812  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG |     </dhcp>
	I0210 11:34:35.125822  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG |   </ip>
	I0210 11:34:35.125833  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG |   
	I0210 11:34:35.125846  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | </network>
	I0210 11:34:35.125858  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | 
	I0210 11:34:35.131096  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | trying to create private KVM network mk-kubernetes-upgrade-557458 192.168.50.0/24...
	I0210 11:34:35.211011  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | private KVM network mk-kubernetes-upgrade-557458 192.168.50.0/24 created
	I0210 11:34:35.211057  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) setting up store path in /home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458 ...
	I0210 11:34:35.211071  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:35.210968  154517 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:34:35.211093  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) building disk image from file:///home/jenkins/minikube-integration/20385-109271/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 11:34:35.211122  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Downloading /home/jenkins/minikube-integration/20385-109271/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20385-109271/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 11:34:35.491250  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:35.491135  154517 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa...
	I0210 11:34:35.635115  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:35.634892  154517 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/kubernetes-upgrade-557458.rawdisk...
	I0210 11:34:35.635160  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | Writing magic tar header
	I0210 11:34:35.635201  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458 (perms=drwx------)
	I0210 11:34:35.635212  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | Writing SSH key tar header
	I0210 11:34:35.635262  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube/machines (perms=drwxr-xr-x)
	I0210 11:34:35.635304  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube (perms=drwxr-xr-x)
	I0210 11:34:35.635317  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:35.635019  154517 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458 ...
	I0210 11:34:35.635335  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458
	I0210 11:34:35.635350  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube/machines
	I0210 11:34:35.635374  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) setting executable bit set on /home/jenkins/minikube-integration/20385-109271 (perms=drwxrwxr-x)
	I0210 11:34:35.635391  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0210 11:34:35.635400  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:34:35.635406  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0210 11:34:35.635422  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) creating domain...
	I0210 11:34:35.635439  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271
	I0210 11:34:35.635457  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0210 11:34:35.635473  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | checking permissions on dir: /home/jenkins
	I0210 11:34:35.635515  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | checking permissions on dir: /home
	I0210 11:34:35.635556  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | skipping /home - not owner
	I0210 11:34:35.636578  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) define libvirt domain using xml: 
	I0210 11:34:35.636603  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) <domain type='kvm'>
	I0210 11:34:35.636616  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   <name>kubernetes-upgrade-557458</name>
	I0210 11:34:35.636630  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   <memory unit='MiB'>2200</memory>
	I0210 11:34:35.636643  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   <vcpu>2</vcpu>
	I0210 11:34:35.636651  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   <features>
	I0210 11:34:35.636661  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <acpi/>
	I0210 11:34:35.636674  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <apic/>
	I0210 11:34:35.636687  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <pae/>
	I0210 11:34:35.636698  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     
	I0210 11:34:35.636710  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   </features>
	I0210 11:34:35.636726  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   <cpu mode='host-passthrough'>
	I0210 11:34:35.636760  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   
	I0210 11:34:35.636792  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   </cpu>
	I0210 11:34:35.636806  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   <os>
	I0210 11:34:35.636818  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <type>hvm</type>
	I0210 11:34:35.636831  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <boot dev='cdrom'/>
	I0210 11:34:35.636841  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <boot dev='hd'/>
	I0210 11:34:35.636851  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <bootmenu enable='no'/>
	I0210 11:34:35.636861  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   </os>
	I0210 11:34:35.636869  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   <devices>
	I0210 11:34:35.636881  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <disk type='file' device='cdrom'>
	I0210 11:34:35.636896  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <source file='/home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/boot2docker.iso'/>
	I0210 11:34:35.636907  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <target dev='hdc' bus='scsi'/>
	I0210 11:34:35.636916  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <readonly/>
	I0210 11:34:35.636949  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     </disk>
	I0210 11:34:35.636964  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <disk type='file' device='disk'>
	I0210 11:34:35.636982  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0210 11:34:35.637000  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <source file='/home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/kubernetes-upgrade-557458.rawdisk'/>
	I0210 11:34:35.637012  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <target dev='hda' bus='virtio'/>
	I0210 11:34:35.637038  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     </disk>
	I0210 11:34:35.637065  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <interface type='network'>
	I0210 11:34:35.637080  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <source network='mk-kubernetes-upgrade-557458'/>
	I0210 11:34:35.637092  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <model type='virtio'/>
	I0210 11:34:35.637104  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     </interface>
	I0210 11:34:35.637115  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <interface type='network'>
	I0210 11:34:35.637134  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <source network='default'/>
	I0210 11:34:35.637148  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <model type='virtio'/>
	I0210 11:34:35.637167  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     </interface>
	I0210 11:34:35.637178  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <serial type='pty'>
	I0210 11:34:35.637188  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <target port='0'/>
	I0210 11:34:35.637198  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     </serial>
	I0210 11:34:35.637207  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <console type='pty'>
	I0210 11:34:35.637217  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <target type='serial' port='0'/>
	I0210 11:34:35.637223  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     </console>
	I0210 11:34:35.637233  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     <rng model='virtio'>
	I0210 11:34:35.637245  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)       <backend model='random'>/dev/random</backend>
	I0210 11:34:35.637257  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     </rng>
	I0210 11:34:35.637267  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     
	I0210 11:34:35.637278  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)     
	I0210 11:34:35.637289  154493 main.go:141] libmachine: (kubernetes-upgrade-557458)   </devices>
	I0210 11:34:35.637296  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) </domain>
	I0210 11:34:35.637311  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) 
	I0210 11:34:35.642256  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:5b:b9:af in network default
	I0210 11:34:35.642836  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) starting domain...
	I0210 11:34:35.642864  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:35.642874  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) ensuring networks are active...
	I0210 11:34:35.643637  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Ensuring network default is active
	I0210 11:34:35.643956  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Ensuring network mk-kubernetes-upgrade-557458 is active
	I0210 11:34:35.644529  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) getting domain XML...
	I0210 11:34:35.645321  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) creating domain...
	I0210 11:34:36.987475  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) waiting for IP...
	I0210 11:34:36.988139  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:36.988574  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:36.988642  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:36.988567  154517 retry.go:31] will retry after 297.320163ms: waiting for domain to come up
	I0210 11:34:37.287267  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:37.287737  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:37.287771  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:37.287715  154517 retry.go:31] will retry after 272.222359ms: waiting for domain to come up
	I0210 11:34:37.561347  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:37.561965  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:37.561998  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:37.561945  154517 retry.go:31] will retry after 446.382985ms: waiting for domain to come up
	I0210 11:34:38.009718  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:38.010105  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:38.010142  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:38.010086  154517 retry.go:31] will retry after 442.798268ms: waiting for domain to come up
	I0210 11:34:38.454468  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:38.455048  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:38.455081  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:38.455004  154517 retry.go:31] will retry after 700.554663ms: waiting for domain to come up
	I0210 11:34:39.157258  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:39.157860  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:39.157887  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:39.157827  154517 retry.go:31] will retry after 871.957772ms: waiting for domain to come up
	I0210 11:34:40.031223  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:40.031778  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:40.031811  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:40.031760  154517 retry.go:31] will retry after 791.861104ms: waiting for domain to come up
	I0210 11:34:40.825516  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:40.825933  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:40.825974  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:40.825905  154517 retry.go:31] will retry after 958.170716ms: waiting for domain to come up
	I0210 11:34:41.785774  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:41.786228  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:41.786254  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:41.786190  154517 retry.go:31] will retry after 1.328977081s: waiting for domain to come up
	I0210 11:34:43.520044  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:43.520540  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:43.520561  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:43.520522  154517 retry.go:31] will retry after 1.617885997s: waiting for domain to come up
	I0210 11:34:45.139709  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:45.140171  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:45.140203  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:45.140142  154517 retry.go:31] will retry after 2.384520153s: waiting for domain to come up
	I0210 11:34:47.527632  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:47.528271  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:47.528301  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:47.528251  154517 retry.go:31] will retry after 3.126902113s: waiting for domain to come up
	I0210 11:34:50.656408  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:50.656923  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:50.656962  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:50.656901  154517 retry.go:31] will retry after 4.077794171s: waiting for domain to come up
	I0210 11:34:54.736273  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:54.736784  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find current IP address of domain kubernetes-upgrade-557458 in network mk-kubernetes-upgrade-557458
	I0210 11:34:54.736812  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | I0210 11:34:54.736756  154517 retry.go:31] will retry after 4.745049211s: waiting for domain to come up
	I0210 11:34:59.487370  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:59.487825  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) found domain IP: 192.168.50.30
	I0210 11:34:59.487851  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) reserving static IP address...
	I0210 11:34:59.487864  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has current primary IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:59.488235  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-557458", mac: "52:54:00:4b:eb:a6", ip: "192.168.50.30"} in network mk-kubernetes-upgrade-557458
	I0210 11:34:59.565179  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | Getting to WaitForSSH function...
	I0210 11:34:59.565210  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) reserved static IP address 192.168.50.30 for domain kubernetes-upgrade-557458
	I0210 11:34:59.565224  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) waiting for SSH...
	I0210 11:34:59.568234  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:34:59.568527  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458
	I0210 11:34:59.568554  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-557458 interface with MAC address 52:54:00:4b:eb:a6
	I0210 11:34:59.568698  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | Using SSH client type: external
	I0210 11:34:59.568728  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa (-rw-------)
	I0210 11:34:59.568764  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 11:34:59.568779  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | About to run SSH command:
	I0210 11:34:59.568793  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | exit 0
	I0210 11:34:59.572994  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | SSH cmd err, output: exit status 255: 
	I0210 11:34:59.573023  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0210 11:34:59.573037  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | command : exit 0
	I0210 11:34:59.573046  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | err     : exit status 255
	I0210 11:34:59.573053  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | output  : 
	I0210 11:35:02.574744  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | Getting to WaitForSSH function...
	I0210 11:35:02.577344  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:02.577716  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:02.577746  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:02.577950  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | Using SSH client type: external
	I0210 11:35:02.577971  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa (-rw-------)
	I0210 11:35:02.578018  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 11:35:02.578039  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | About to run SSH command:
	I0210 11:35:02.578060  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | exit 0
	I0210 11:35:02.703359  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | SSH cmd err, output: <nil>: 
	I0210 11:35:02.703660  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) KVM machine creation complete
	I0210 11:35:02.704034  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetConfigRaw
	I0210 11:35:02.704645  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:35:02.704838  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:35:02.705012  154493 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 11:35:02.705028  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetState
	I0210 11:35:02.706303  154493 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 11:35:02.706324  154493 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 11:35:02.706331  154493 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 11:35:02.706337  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:35:02.708967  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:02.709289  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:02.709320  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:02.709439  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:35:02.709657  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:02.709863  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:02.710067  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:35:02.710246  154493 main.go:141] libmachine: Using SSH client type: native
	I0210 11:35:02.710539  154493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0210 11:35:02.710552  154493 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 11:35:02.814190  154493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:35:02.814216  154493 main.go:141] libmachine: Detecting the provisioner...
	I0210 11:35:02.814225  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:35:02.816830  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:02.817200  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:02.817223  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:02.817475  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:35:02.817663  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:02.817794  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:02.817900  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:35:02.818006  154493 main.go:141] libmachine: Using SSH client type: native
	I0210 11:35:02.818178  154493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0210 11:35:02.818189  154493 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 11:35:02.923615  154493 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 11:35:02.923707  154493 main.go:141] libmachine: found compatible host: buildroot
	I0210 11:35:02.923721  154493 main.go:141] libmachine: Provisioning with buildroot...
	I0210 11:35:02.923734  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetMachineName
	I0210 11:35:02.924003  154493 buildroot.go:166] provisioning hostname "kubernetes-upgrade-557458"
	I0210 11:35:02.924036  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetMachineName
	I0210 11:35:02.924243  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:35:02.926910  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:02.927355  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:02.927388  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:02.927553  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:35:02.927722  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:02.927899  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:02.928031  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:35:02.928200  154493 main.go:141] libmachine: Using SSH client type: native
	I0210 11:35:02.928410  154493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0210 11:35:02.928426  154493 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-557458 && echo "kubernetes-upgrade-557458" | sudo tee /etc/hostname
	I0210 11:35:03.048791  154493 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-557458
	
	I0210 11:35:03.048827  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:35:03.052170  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.052538  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.052567  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.052736  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:35:03.052936  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:03.053111  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:03.053296  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:35:03.053510  154493 main.go:141] libmachine: Using SSH client type: native
	I0210 11:35:03.053743  154493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0210 11:35:03.053771  154493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-557458' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-557458/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-557458' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:35:03.166957  154493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:35:03.166988  154493 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20385-109271/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-109271/.minikube}
	I0210 11:35:03.167006  154493 buildroot.go:174] setting up certificates
	I0210 11:35:03.167018  154493 provision.go:84] configureAuth start
	I0210 11:35:03.167030  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetMachineName
	I0210 11:35:03.167384  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetIP
	I0210 11:35:03.170183  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.170538  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.170572  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.170766  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:35:03.173154  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.173516  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.173544  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.173694  154493 provision.go:143] copyHostCerts
	I0210 11:35:03.173758  154493 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem, removing ...
	I0210 11:35:03.173774  154493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem
	I0210 11:35:03.173851  154493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem (1078 bytes)
	I0210 11:35:03.173969  154493 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem, removing ...
	I0210 11:35:03.173980  154493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem
	I0210 11:35:03.174010  154493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem (1123 bytes)
	I0210 11:35:03.174065  154493 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem, removing ...
	I0210 11:35:03.174072  154493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem
	I0210 11:35:03.174095  154493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem (1679 bytes)
	I0210 11:35:03.174142  154493 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-557458 san=[127.0.0.1 192.168.50.30 kubernetes-upgrade-557458 localhost minikube]
	I0210 11:35:03.349837  154493 provision.go:177] copyRemoteCerts
	I0210 11:35:03.349908  154493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:35:03.349938  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:35:03.352394  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.352756  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.352791  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.352941  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:35:03.353132  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:03.353311  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:35:03.353457  154493 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa Username:docker}
	I0210 11:35:03.436885  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:35:03.460063  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0210 11:35:03.481959  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 11:35:03.503440  154493 provision.go:87] duration metric: took 336.410109ms to configureAuth
	I0210 11:35:03.503469  154493 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:35:03.503642  154493 config.go:182] Loaded profile config "kubernetes-upgrade-557458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 11:35:03.503718  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:35:03.506270  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.506566  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.506595  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.506736  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:35:03.506936  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:03.507106  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:03.507225  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:35:03.507397  154493 main.go:141] libmachine: Using SSH client type: native
	I0210 11:35:03.507581  154493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0210 11:35:03.507603  154493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 11:35:03.731553  154493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 11:35:03.731592  154493 main.go:141] libmachine: Checking connection to Docker...
	I0210 11:35:03.731601  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetURL
	I0210 11:35:03.732889  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | using libvirt version 6000000
	I0210 11:35:03.735390  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.735775  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.735795  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.736024  154493 main.go:141] libmachine: Docker is up and running!
	I0210 11:35:03.736052  154493 main.go:141] libmachine: Reticulating splines...
	I0210 11:35:03.736062  154493 client.go:171] duration metric: took 28.616295075s to LocalClient.Create
	I0210 11:35:03.736085  154493 start.go:167] duration metric: took 28.616376451s to libmachine.API.Create "kubernetes-upgrade-557458"
	I0210 11:35:03.736099  154493 start.go:293] postStartSetup for "kubernetes-upgrade-557458" (driver="kvm2")
	I0210 11:35:03.736112  154493 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:35:03.736130  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:35:03.736366  154493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:35:03.736399  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:35:03.738551  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.738875  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.738915  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.739054  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:35:03.739262  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:03.739461  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:35:03.739597  154493 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa Username:docker}
	I0210 11:35:03.821932  154493 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:35:03.825840  154493 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:35:03.825868  154493 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/addons for local assets ...
	I0210 11:35:03.825976  154493 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/files for local assets ...
	I0210 11:35:03.826046  154493 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem -> 1164702.pem in /etc/ssl/certs
	I0210 11:35:03.826130  154493 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:35:03.835259  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:35:03.858293  154493 start.go:296] duration metric: took 122.175941ms for postStartSetup
	I0210 11:35:03.858357  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetConfigRaw
	I0210 11:35:03.859007  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetIP
	I0210 11:35:03.861475  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.861785  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.861814  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.862024  154493 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/config.json ...
	I0210 11:35:03.862231  154493 start.go:128] duration metric: took 28.762535971s to createHost
	I0210 11:35:03.862262  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:35:03.864550  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.864893  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.864923  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.865096  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:35:03.865282  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:03.865456  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:03.865650  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:35:03.865812  154493 main.go:141] libmachine: Using SSH client type: native
	I0210 11:35:03.865989  154493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0210 11:35:03.866002  154493 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:35:03.971790  154493 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739187303.954274570
	
	I0210 11:35:03.971813  154493 fix.go:216] guest clock: 1739187303.954274570
	I0210 11:35:03.971824  154493 fix.go:229] Guest: 2025-02-10 11:35:03.95427457 +0000 UTC Remote: 2025-02-10 11:35:03.862247453 +0000 UTC m=+28.873444783 (delta=92.027117ms)
	I0210 11:35:03.971866  154493 fix.go:200] guest clock delta is within tolerance: 92.027117ms
	I0210 11:35:03.971876  154493 start.go:83] releasing machines lock for "kubernetes-upgrade-557458", held for 28.872288349s
	I0210 11:35:03.971899  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:35:03.972185  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetIP
	I0210 11:35:03.974912  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.975301  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.975334  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.975509  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:35:03.976052  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:35:03.976225  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:35:03.976313  154493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:35:03.976364  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:35:03.976468  154493 ssh_runner.go:195] Run: cat /version.json
	I0210 11:35:03.976492  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:35:03.979153  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.979561  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.979597  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.979621  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.979779  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:35:03.979965  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:03.979997  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:03.980032  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:03.980110  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:35:03.980223  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:35:03.980314  154493 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa Username:docker}
	I0210 11:35:03.980355  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:35:03.980505  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:35:03.980643  154493 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa Username:docker}
	I0210 11:35:04.088557  154493 ssh_runner.go:195] Run: systemctl --version
	I0210 11:35:04.095820  154493 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 11:35:04.255536  154493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 11:35:04.261484  154493 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:35:04.261565  154493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:35:04.276341  154493 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:35:04.276368  154493 start.go:495] detecting cgroup driver to use...
	I0210 11:35:04.276435  154493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:35:04.291425  154493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:35:04.304855  154493 docker.go:217] disabling cri-docker service (if available) ...
	I0210 11:35:04.304921  154493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 11:35:04.318178  154493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 11:35:04.333560  154493 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 11:35:04.452100  154493 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 11:35:04.615905  154493 docker.go:233] disabling docker service ...
	I0210 11:35:04.615981  154493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 11:35:04.630015  154493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 11:35:04.651065  154493 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 11:35:04.778226  154493 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 11:35:04.904170  154493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 11:35:04.918315  154493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:35:04.936028  154493 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0210 11:35:04.936097  154493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:35:04.945720  154493 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 11:35:04.945801  154493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:35:04.955373  154493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:35:04.964957  154493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:35:04.974651  154493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:35:04.984257  154493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:35:04.993257  154493 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:35:04.993316  154493 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:35:05.007068  154493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:35:05.016854  154493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:35:05.138094  154493 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 11:35:05.231521  154493 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 11:35:05.231618  154493 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 11:35:05.238330  154493 start.go:563] Will wait 60s for crictl version
	I0210 11:35:05.238402  154493 ssh_runner.go:195] Run: which crictl
	I0210 11:35:05.242301  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:35:05.294390  154493 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 11:35:05.294476  154493 ssh_runner.go:195] Run: crio --version
	I0210 11:35:05.321956  154493 ssh_runner.go:195] Run: crio --version
	I0210 11:35:05.359078  154493 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0210 11:35:05.360587  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetIP
	I0210 11:35:05.363627  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:05.364034  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:34:49 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:35:05.364067  154493 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:35:05.364314  154493 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0210 11:35:05.368434  154493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:35:05.380302  154493 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-557458 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-557458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:35:05.380460  154493 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 11:35:05.380517  154493 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:35:05.409088  154493 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 11:35:05.409163  154493 ssh_runner.go:195] Run: which lz4
	I0210 11:35:05.413508  154493 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 11:35:05.417861  154493 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 11:35:05.417891  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0210 11:35:06.898227  154493 crio.go:462] duration metric: took 1.484754952s to copy over tarball
	I0210 11:35:06.898327  154493 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 11:35:09.538119  154493 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.639746063s)
	I0210 11:35:09.538149  154493 crio.go:469] duration metric: took 2.639887414s to extract the tarball
	I0210 11:35:09.538182  154493 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 11:35:09.579794  154493 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:35:09.623454  154493 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 11:35:09.623483  154493 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 11:35:09.623542  154493 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:35:09.623562  154493 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0210 11:35:09.623595  154493 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0210 11:35:09.623618  154493 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:35:09.623638  154493 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:35:09.623538  154493 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:35:09.623606  154493 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0210 11:35:09.623609  154493 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:35:09.625149  154493 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:35:09.625213  154493 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:35:09.625236  154493 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0210 11:35:09.625299  154493 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0210 11:35:09.625149  154493 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:35:09.625396  154493 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:35:09.625420  154493 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0210 11:35:09.625397  154493 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:35:09.823821  154493 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0210 11:35:09.830133  154493 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0210 11:35:09.844745  154493 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:35:09.868714  154493 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:35:09.870666  154493 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0210 11:35:09.870716  154493 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0210 11:35:09.870755  154493 ssh_runner.go:195] Run: which crictl
	I0210 11:35:09.886146  154493 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:35:09.914587  154493 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0210 11:35:09.914652  154493 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0210 11:35:09.914707  154493 ssh_runner.go:195] Run: which crictl
	I0210 11:35:09.925166  154493 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0210 11:35:09.928470  154493 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:35:09.960075  154493 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0210 11:35:09.960140  154493 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:35:09.960208  154493 ssh_runner.go:195] Run: which crictl
	I0210 11:35:09.979825  154493 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0210 11:35:09.979879  154493 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:35:09.979884  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 11:35:09.979896  154493 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0210 11:35:09.979943  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 11:35:09.979956  154493 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:35:09.979990  154493 ssh_runner.go:195] Run: which crictl
	I0210 11:35:09.979925  154493 ssh_runner.go:195] Run: which crictl
	I0210 11:35:10.041355  154493 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0210 11:35:10.041380  154493 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0210 11:35:10.041419  154493 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0210 11:35:10.041425  154493 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:35:10.041433  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:35:10.041449  154493 ssh_runner.go:195] Run: which crictl
	I0210 11:35:10.041472  154493 ssh_runner.go:195] Run: which crictl
	I0210 11:35:10.077114  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 11:35:10.077221  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:35:10.077239  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:35:10.077252  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 11:35:10.077295  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 11:35:10.077319  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:35:10.121770  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:35:10.237963  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:35:10.237990  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 11:35:10.238114  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:35:10.242671  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 11:35:10.242744  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 11:35:10.242867  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:35:10.247148  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:35:10.379681  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:35:10.379681  154493 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0210 11:35:10.383258  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:35:10.391369  154493 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0210 11:35:10.405565  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 11:35:10.405612  154493 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0210 11:35:10.405580  154493 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:35:10.459672  154493 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0210 11:35:10.468185  154493 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0210 11:35:10.491202  154493 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0210 11:35:10.491202  154493 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0210 11:35:10.710481  154493 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:35:10.847260  154493 cache_images.go:92] duration metric: took 1.223759655s to LoadCachedImages
	W0210 11:35:10.847361  154493 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0210 11:35:10.847375  154493 kubeadm.go:934] updating node { 192.168.50.30 8443 v1.20.0 crio true true} ...
	I0210 11:35:10.847494  154493 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-557458 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-557458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:35:10.847580  154493 ssh_runner.go:195] Run: crio config
	I0210 11:35:10.896874  154493 cni.go:84] Creating CNI manager for ""
	I0210 11:35:10.896900  154493 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:35:10.896910  154493 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 11:35:10.896929  154493 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.30 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-557458 NodeName:kubernetes-upgrade-557458 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0210 11:35:10.897098  154493 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-557458"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 11:35:10.897175  154493 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0210 11:35:10.907121  154493 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:35:10.907244  154493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:35:10.916952  154493 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0210 11:35:10.932925  154493 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:35:10.948636  154493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0210 11:35:10.966361  154493 ssh_runner.go:195] Run: grep 192.168.50.30	control-plane.minikube.internal$ /etc/hosts
	I0210 11:35:10.970573  154493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:35:10.983331  154493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:35:11.102992  154493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:35:11.121727  154493 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458 for IP: 192.168.50.30
	I0210 11:35:11.121762  154493 certs.go:194] generating shared ca certs ...
	I0210 11:35:11.121786  154493 certs.go:226] acquiring lock for ca certs: {Name:mk41def3593b0ff6effd099cf80de2e0c576c931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:35:11.121950  154493 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key
	I0210 11:35:11.122008  154493 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key
	I0210 11:35:11.122019  154493 certs.go:256] generating profile certs ...
	I0210 11:35:11.122087  154493 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/client.key
	I0210 11:35:11.122109  154493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/client.crt with IP's: []
	I0210 11:35:11.262579  154493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/client.crt ...
	I0210 11:35:11.262615  154493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/client.crt: {Name:mk173e7551511c9cbec8fe580d172eafdb9c25dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:35:11.262809  154493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/client.key ...
	I0210 11:35:11.262829  154493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/client.key: {Name:mkc5bd2da994dc16d96539d0482348ee2df682b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:35:11.262937  154493 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.key.3052cc4e
	I0210 11:35:11.262955  154493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.crt.3052cc4e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.30]
	I0210 11:35:11.404385  154493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.crt.3052cc4e ...
	I0210 11:35:11.404420  154493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.crt.3052cc4e: {Name:mk759f0d23a99d4d95725b090631c43051ddbf9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:35:11.404590  154493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.key.3052cc4e ...
	I0210 11:35:11.404604  154493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.key.3052cc4e: {Name:mk4c0c46638ff59e7b8df4c835aa5393a0aa6b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:35:11.404684  154493 certs.go:381] copying /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.crt.3052cc4e -> /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.crt
	I0210 11:35:11.404762  154493 certs.go:385] copying /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.key.3052cc4e -> /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.key
	I0210 11:35:11.404827  154493 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/proxy-client.key
	I0210 11:35:11.404845  154493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/proxy-client.crt with IP's: []
	I0210 11:35:11.538648  154493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/proxy-client.crt ...
	I0210 11:35:11.538687  154493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/proxy-client.crt: {Name:mk9d1332524796fa27e5baf219e50b9c473116eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:35:11.538869  154493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/proxy-client.key ...
	I0210 11:35:11.538882  154493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/proxy-client.key: {Name:mka320778ad8be4758e68efc331c9348acc86af6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:35:11.539044  154493 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem (1338 bytes)
	W0210 11:35:11.539082  154493 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470_empty.pem, impossibly tiny 0 bytes
	I0210 11:35:11.539092  154493 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 11:35:11.539114  154493 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem (1078 bytes)
	I0210 11:35:11.539137  154493 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem (1123 bytes)
	I0210 11:35:11.539164  154493 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem (1679 bytes)
	I0210 11:35:11.539232  154493 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:35:11.539807  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:35:11.564772  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0210 11:35:11.587602  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:35:11.609689  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 11:35:11.632326  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0210 11:35:11.655330  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:35:11.678339  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:35:11.701245  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:35:11.724119  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:35:11.747589  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem --> /usr/share/ca-certificates/116470.pem (1338 bytes)
	I0210 11:35:11.771564  154493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /usr/share/ca-certificates/1164702.pem (1708 bytes)
	I0210 11:35:11.793913  154493 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 11:35:11.809677  154493 ssh_runner.go:195] Run: openssl version
	I0210 11:35:11.815370  154493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:35:11.825462  154493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:35:11.829749  154493 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:35:11.829809  154493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:35:11.835250  154493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:35:11.845221  154493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116470.pem && ln -fs /usr/share/ca-certificates/116470.pem /etc/ssl/certs/116470.pem"
	I0210 11:35:11.855948  154493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116470.pem
	I0210 11:35:11.860384  154493 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:41 /usr/share/ca-certificates/116470.pem
	I0210 11:35:11.860462  154493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116470.pem
	I0210 11:35:11.866040  154493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116470.pem /etc/ssl/certs/51391683.0"
	I0210 11:35:11.875894  154493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1164702.pem && ln -fs /usr/share/ca-certificates/1164702.pem /etc/ssl/certs/1164702.pem"
	I0210 11:35:11.886403  154493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1164702.pem
	I0210 11:35:11.890628  154493 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:41 /usr/share/ca-certificates/1164702.pem
	I0210 11:35:11.890698  154493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1164702.pem
	I0210 11:35:11.896124  154493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1164702.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:35:11.906646  154493 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:35:11.910732  154493 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 11:35:11.910794  154493 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-557458 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-557458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:35:11.910886  154493 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 11:35:11.910938  154493 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:35:11.947035  154493 cri.go:89] found id: ""
	I0210 11:35:11.947122  154493 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 11:35:11.962072  154493 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:35:11.971684  154493 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:35:11.985514  154493 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:35:11.985536  154493 kubeadm.go:157] found existing configuration files:
	
	I0210 11:35:11.985584  154493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:35:11.994796  154493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:35:11.994869  154493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:35:12.009820  154493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:35:12.023745  154493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:35:12.023813  154493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:35:12.034841  154493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:35:12.047244  154493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:35:12.047319  154493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:35:12.059583  154493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:35:12.070912  154493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:35:12.070977  154493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:35:12.079771  154493 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:35:12.198936  154493 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:35:12.199025  154493 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:35:12.337538  154493 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:35:12.337695  154493 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:35:12.337849  154493 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:35:12.505490  154493 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:35:12.508241  154493 out.go:235]   - Generating certificates and keys ...
	I0210 11:35:12.508340  154493 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:35:12.508405  154493 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:35:12.643126  154493 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 11:35:12.769051  154493 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 11:35:12.830436  154493 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 11:35:12.927064  154493 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 11:35:13.257087  154493 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 11:35:13.257336  154493 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-557458 localhost] and IPs [192.168.50.30 127.0.0.1 ::1]
	I0210 11:35:13.623004  154493 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 11:35:13.623280  154493 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-557458 localhost] and IPs [192.168.50.30 127.0.0.1 ::1]
	I0210 11:35:13.903056  154493 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 11:35:14.065224  154493 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 11:35:14.215968  154493 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 11:35:14.216141  154493 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:35:14.336601  154493 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:35:14.463926  154493 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:35:14.663964  154493 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:35:14.770057  154493 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:35:14.785919  154493 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:35:14.787089  154493 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:35:14.787211  154493 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:35:14.953191  154493 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:35:14.954929  154493 out.go:235]   - Booting up control plane ...
	I0210 11:35:14.955064  154493 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:35:14.962505  154493 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:35:14.963933  154493 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:35:14.964956  154493 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:35:14.975284  154493 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:35:54.971582  154493 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:35:54.971867  154493 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:35:54.972155  154493 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:35:59.972132  154493 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:35:59.972498  154493 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:36:09.971962  154493 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:36:09.972253  154493 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:36:29.972180  154493 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:36:29.972450  154493 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:37:09.973721  154493 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:37:09.974009  154493 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:37:09.974029  154493 kubeadm.go:310] 
	I0210 11:37:09.974082  154493 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:37:09.974164  154493 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:37:09.974186  154493 kubeadm.go:310] 
	I0210 11:37:09.974238  154493 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:37:09.974282  154493 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:37:09.974367  154493 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:37:09.974376  154493 kubeadm.go:310] 
	I0210 11:37:09.974478  154493 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:37:09.974526  154493 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:37:09.974551  154493 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:37:09.974555  154493 kubeadm.go:310] 
	I0210 11:37:09.974655  154493 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:37:09.974748  154493 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:37:09.974753  154493 kubeadm.go:310] 
	I0210 11:37:09.974880  154493 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:37:09.974995  154493 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:37:09.975106  154493 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:37:09.975217  154493 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:37:09.975236  154493 kubeadm.go:310] 
	I0210 11:37:09.976233  154493 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:37:09.976380  154493 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:37:09.976489  154493 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0210 11:37:09.976660  154493 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-557458 localhost] and IPs [192.168.50.30 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-557458 localhost] and IPs [192.168.50.30 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-557458 localhost] and IPs [192.168.50.30 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-557458 localhost] and IPs [192.168.50.30 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 11:37:09.976717  154493 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 11:37:10.460006  154493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:37:10.479270  154493 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:37:10.490766  154493 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:37:10.490790  154493 kubeadm.go:157] found existing configuration files:
	
	I0210 11:37:10.490853  154493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:37:10.503739  154493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:37:10.503826  154493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:37:10.516996  154493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:37:10.526158  154493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:37:10.526220  154493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:37:10.535542  154493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:37:10.544766  154493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:37:10.544827  154493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:37:10.553759  154493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:37:10.562375  154493 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:37:10.562451  154493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:37:10.571307  154493 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:37:10.647262  154493 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:37:10.647496  154493 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:37:10.819210  154493 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:37:10.819412  154493 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:37:10.819556  154493 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:37:11.017130  154493 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:37:11.018942  154493 out.go:235]   - Generating certificates and keys ...
	I0210 11:37:11.019087  154493 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:37:11.019223  154493 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:37:11.019348  154493 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 11:37:11.019455  154493 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 11:37:11.019557  154493 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 11:37:11.019638  154493 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 11:37:11.019736  154493 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 11:37:11.019852  154493 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 11:37:11.020253  154493 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 11:37:11.020632  154493 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 11:37:11.020693  154493 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 11:37:11.020773  154493 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:37:11.329120  154493 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:37:11.680498  154493 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:37:11.778558  154493 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:37:11.863517  154493 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:37:11.890035  154493 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:37:11.890175  154493 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:37:11.890237  154493 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:37:12.057790  154493 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:37:12.059443  154493 out.go:235]   - Booting up control plane ...
	I0210 11:37:12.059577  154493 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:37:12.059692  154493 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:37:12.060260  154493 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:37:12.060361  154493 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:37:12.070146  154493 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:37:52.073181  154493 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:37:52.073310  154493 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:37:52.073532  154493 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:37:57.074191  154493 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:37:57.074464  154493 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:38:07.074849  154493 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:38:07.075165  154493 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:38:27.074517  154493 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:38:27.074806  154493 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:39:07.074855  154493 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:39:07.075146  154493 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:39:07.075165  154493 kubeadm.go:310] 
	I0210 11:39:07.075254  154493 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:39:07.075310  154493 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:39:07.075321  154493 kubeadm.go:310] 
	I0210 11:39:07.075381  154493 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:39:07.075428  154493 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:39:07.075602  154493 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:39:07.075628  154493 kubeadm.go:310] 
	I0210 11:39:07.075791  154493 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:39:07.075832  154493 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:39:07.075878  154493 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:39:07.075888  154493 kubeadm.go:310] 
	I0210 11:39:07.076041  154493 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:39:07.076135  154493 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:39:07.076145  154493 kubeadm.go:310] 
	I0210 11:39:07.076290  154493 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:39:07.076426  154493 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:39:07.076551  154493 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:39:07.076646  154493 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:39:07.076658  154493 kubeadm.go:310] 
	I0210 11:39:07.077454  154493 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:39:07.077557  154493 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:39:07.077663  154493 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 11:39:07.077714  154493 kubeadm.go:394] duration metric: took 3m55.166924661s to StartCluster
	I0210 11:39:07.077768  154493 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:39:07.077832  154493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:39:07.128413  154493 cri.go:89] found id: ""
	I0210 11:39:07.128446  154493 logs.go:282] 0 containers: []
	W0210 11:39:07.128456  154493 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:39:07.128465  154493 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:39:07.128540  154493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:39:07.166883  154493 cri.go:89] found id: ""
	I0210 11:39:07.166913  154493 logs.go:282] 0 containers: []
	W0210 11:39:07.166925  154493 logs.go:284] No container was found matching "etcd"
	I0210 11:39:07.166933  154493 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:39:07.167006  154493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:39:07.212011  154493 cri.go:89] found id: ""
	I0210 11:39:07.212044  154493 logs.go:282] 0 containers: []
	W0210 11:39:07.212055  154493 logs.go:284] No container was found matching "coredns"
	I0210 11:39:07.212063  154493 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:39:07.212130  154493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:39:07.273703  154493 cri.go:89] found id: ""
	I0210 11:39:07.273737  154493 logs.go:282] 0 containers: []
	W0210 11:39:07.273750  154493 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:39:07.273758  154493 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:39:07.273818  154493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:39:07.311646  154493 cri.go:89] found id: ""
	I0210 11:39:07.311683  154493 logs.go:282] 0 containers: []
	W0210 11:39:07.311696  154493 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:39:07.311711  154493 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:39:07.311776  154493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:39:07.354631  154493 cri.go:89] found id: ""
	I0210 11:39:07.354667  154493 logs.go:282] 0 containers: []
	W0210 11:39:07.354678  154493 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:39:07.354687  154493 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:39:07.354750  154493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:39:07.387130  154493 cri.go:89] found id: ""
	I0210 11:39:07.387175  154493 logs.go:282] 0 containers: []
	W0210 11:39:07.387196  154493 logs.go:284] No container was found matching "kindnet"
	I0210 11:39:07.387218  154493 logs.go:123] Gathering logs for kubelet ...
	I0210 11:39:07.387233  154493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:39:07.461494  154493 logs.go:123] Gathering logs for dmesg ...
	I0210 11:39:07.461546  154493 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:39:07.477767  154493 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:39:07.477800  154493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:39:07.629209  154493 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:39:07.629237  154493 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:39:07.629253  154493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:39:07.745095  154493 logs.go:123] Gathering logs for container status ...
	I0210 11:39:07.745136  154493 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0210 11:39:07.790724  154493 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 11:39:07.790852  154493 out.go:270] * 
	* 
	W0210 11:39:07.790922  154493 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:39:07.790940  154493 out.go:270] * 
	* 
	W0210 11:39:07.791924  154493 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 11:39:07.794801  154493 out.go:201] 
	W0210 11:39:07.795733  154493 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:39:07.795782  154493 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 11:39:07.795810  154493 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 11:39:07.797040  154493 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-557458 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
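The kubeadm output and the K8S_KUBELET_NOT_RUNNING suggestion captured above already name the relevant checks. As an illustrative sketch only (these commands were not run as part of this test, and are assembled from the log text above against the failing profile), the triage would look like:

	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-557458 sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-557458 sudo journalctl -xeu kubelet
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-557458 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the same start with the cgroup-driver override the log suggests
	out/minikube-linux-amd64 start -p kubernetes-upgrade-557458 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd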
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-557458
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-557458: (6.371144859s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-557458 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-557458 status --format={{.Host}}: exit status 7 (84.178325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
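The non-zero exit is expected here: after the stop, `status --format={{.Host}}` prints Stopped (see the stdout block above) and exits non-zero (7 in this run), which the test tolerates. A guard that waits for that state might look like the following sketch (illustrative only, not part of this run):

	until [ "$(out/minikube-linux-amd64 status --format='{{.Host}}' -p kubernetes-upgrade-557458)" = "Stopped" ]; do sleep 1; done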
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-557458 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-557458 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.94586089s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-557458 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-557458 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-557458 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (88.632363ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-557458] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-557458
	    minikube start -p kubernetes-upgrade-557458 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5574582 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-557458 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
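The suggestion block above lists the recovery paths for the refused downgrade. Option 1, recreating the profile at the requested version, would look like this as a sketch (reusing the driver and runtime flags from the failing command; not executed in this run):

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-557458
	out/minikube-linux-amd64 start -p kubernetes-upgrade-557458 --kubernetes-version=v1.20.0 --memory=2200 --driver=kvm2 --container-runtime=crio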
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-557458 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-557458 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.237133415s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-02-10 11:41:22.683600812 +0000 UTC m=+4107.453886779
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-557458 -n kubernetes-upgrade-557458
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-557458 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-557458 logs -n 25: (2.126252476s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475 sudo cat                | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475 sudo cat                | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475 sudo cat                | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-804475                         | enable-default-cni-804475 | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC | 10 Feb 25 11:40 UTC |
	| start   | -p old-k8s-version-510006                            | old-k8s-version-510006    | jenkins | v1.35.0 | 10 Feb 25 11:40 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-804475 pgrep -a                           | flannel-804475            | jenkins | v1.35.0 | 10 Feb 25 11:41 UTC | 10 Feb 25 11:41 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-804475 pgrep -a                            | bridge-804475             | jenkins | v1.35.0 | 10 Feb 25 11:41 UTC | 10 Feb 25 11:41 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 11:40:33
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 11:40:33.836746  166167 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:40:33.836847  166167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:40:33.836858  166167 out.go:358] Setting ErrFile to fd 2...
	I0210 11:40:33.836865  166167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:40:33.837045  166167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:40:33.837684  166167 out.go:352] Setting JSON to false
	I0210 11:40:33.838761  166167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8576,"bootTime":1739179058,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 11:40:33.838875  166167 start.go:139] virtualization: kvm guest
	I0210 11:40:33.840932  166167 out.go:177] * [old-k8s-version-510006] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 11:40:33.842170  166167 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:40:33.842200  166167 notify.go:220] Checking for updates...
	I0210 11:40:33.844559  166167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:40:33.845725  166167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:40:33.846881  166167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:40:33.847978  166167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 11:40:33.849154  166167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:40:33.851093  166167 config.go:182] Loaded profile config "bridge-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:40:33.851249  166167 config.go:182] Loaded profile config "flannel-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:40:33.851382  166167 config.go:182] Loaded profile config "kubernetes-upgrade-557458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:40:33.851509  166167 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:40:33.890654  166167 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 11:40:33.891961  166167 start.go:297] selected driver: kvm2
	I0210 11:40:33.891977  166167 start.go:901] validating driver "kvm2" against <nil>
	I0210 11:40:33.891988  166167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:40:33.892751  166167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:40:33.892855  166167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-109271/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 11:40:33.911241  166167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 11:40:33.911307  166167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 11:40:33.911642  166167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:40:33.911682  166167 cni.go:84] Creating CNI manager for ""
	I0210 11:40:33.911742  166167 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:40:33.911755  166167 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 11:40:33.911839  166167 start.go:340] cluster config:
	{Name:old-k8s-version-510006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:40:33.911973  166167 iso.go:125] acquiring lock: {Name:mk479d49a84808a4b16be867aad83d1d3d802291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:40:33.914057  166167 out.go:177] * Starting "old-k8s-version-510006" primary control-plane node in "old-k8s-version-510006" cluster
	I0210 11:40:34.709939  162916 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 11:40:34.710031  162916 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:40:34.710135  162916 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:40:34.710268  162916 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:40:34.710428  162916 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 11:40:34.710494  162916 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:40:34.711929  162916 out.go:235]   - Generating certificates and keys ...
	I0210 11:40:34.712013  162916 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:40:34.712093  162916 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:40:34.712182  162916 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 11:40:34.712254  162916 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 11:40:34.712314  162916 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 11:40:34.712399  162916 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 11:40:34.712508  162916 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 11:40:34.712647  162916 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-804475 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	I0210 11:40:34.712728  162916 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 11:40:34.712869  162916 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-804475 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	I0210 11:40:34.712947  162916 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 11:40:34.713035  162916 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 11:40:34.713100  162916 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 11:40:34.713177  162916 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:40:34.713262  162916 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:40:34.713340  162916 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 11:40:34.713400  162916 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:40:34.713460  162916 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:40:34.713507  162916 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:40:34.713574  162916 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:40:34.713639  162916 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:40:34.715501  162916 out.go:235]   - Booting up control plane ...
	I0210 11:40:34.715625  162916 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:40:34.715693  162916 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:40:34.715785  162916 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:40:34.715946  162916 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:40:34.716026  162916 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:40:34.716060  162916 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:40:34.716172  162916 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 11:40:34.716263  162916 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 11:40:34.716322  162916 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001796322s
	I0210 11:40:34.716410  162916 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 11:40:34.716493  162916 kubeadm.go:310] [api-check] The API server is healthy after 5.502011647s
	I0210 11:40:34.716625  162916 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 11:40:34.716819  162916 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 11:40:34.716878  162916 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 11:40:34.717064  162916 kubeadm.go:310] [mark-control-plane] Marking the node flannel-804475 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 11:40:34.717135  162916 kubeadm.go:310] [bootstrap-token] Using token: 7nba1y.2on4giqavum7cpsx
	I0210 11:40:34.718236  162916 out.go:235]   - Configuring RBAC rules ...
	I0210 11:40:34.718356  162916 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 11:40:34.718444  162916 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 11:40:34.718611  162916 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 11:40:34.718758  162916 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 11:40:34.718860  162916 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 11:40:34.718934  162916 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 11:40:34.719044  162916 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 11:40:34.719083  162916 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 11:40:34.719140  162916 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 11:40:34.719147  162916 kubeadm.go:310] 
	I0210 11:40:34.719235  162916 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 11:40:34.719242  162916 kubeadm.go:310] 
	I0210 11:40:34.719311  162916 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 11:40:34.719322  162916 kubeadm.go:310] 
	I0210 11:40:34.719362  162916 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 11:40:34.719451  162916 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 11:40:34.719532  162916 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 11:40:34.719545  162916 kubeadm.go:310] 
	I0210 11:40:34.719599  162916 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 11:40:34.719605  162916 kubeadm.go:310] 
	I0210 11:40:34.719646  162916 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 11:40:34.719652  162916 kubeadm.go:310] 
	I0210 11:40:34.719693  162916 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 11:40:34.719768  162916 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 11:40:34.719833  162916 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 11:40:34.719844  162916 kubeadm.go:310] 
	I0210 11:40:34.719915  162916 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 11:40:34.719986  162916 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 11:40:34.719996  162916 kubeadm.go:310] 
	I0210 11:40:34.720087  162916 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7nba1y.2on4giqavum7cpsx \
	I0210 11:40:34.720182  162916 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e691840e69ea7d304c7ca12f82f88a69682411454a0b34203921a76731659912 \
	I0210 11:40:34.720204  162916 kubeadm.go:310] 	--control-plane 
	I0210 11:40:34.720213  162916 kubeadm.go:310] 
	I0210 11:40:34.720286  162916 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 11:40:34.720293  162916 kubeadm.go:310] 
	I0210 11:40:34.720368  162916 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7nba1y.2on4giqavum7cpsx \
	I0210 11:40:34.720466  162916 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e691840e69ea7d304c7ca12f82f88a69682411454a0b34203921a76731659912 
	I0210 11:40:34.720480  162916 cni.go:84] Creating CNI manager for "flannel"
	I0210 11:40:34.721815  162916 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0210 11:40:34.723025  162916 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0210 11:40:34.728488  162916 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0210 11:40:34.728511  162916 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0210 11:40:34.748864  162916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0210 11:40:35.122813  162916 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 11:40:35.122933  162916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:40:35.123011  162916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-804475 minikube.k8s.io/updated_at=2025_02_10T11_40_35_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=flannel-804475 minikube.k8s.io/primary=true
	I0210 11:40:35.137186  162916 ops.go:34] apiserver oom_adj: -16
	I0210 11:40:35.290930  162916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:40:35.791768  162916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:40:36.291806  162916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:40:36.791019  162916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:40:33.597206  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:33.597645  164549 main.go:141] libmachine: (bridge-804475) DBG | unable to find current IP address of domain bridge-804475 in network mk-bridge-804475
	I0210 11:40:33.597670  164549 main.go:141] libmachine: (bridge-804475) DBG | I0210 11:40:33.597632  164787 retry.go:31] will retry after 3.028707357s: waiting for domain to come up
	I0210 11:40:36.628162  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:36.628644  164549 main.go:141] libmachine: (bridge-804475) DBG | unable to find current IP address of domain bridge-804475 in network mk-bridge-804475
	I0210 11:40:36.628676  164549 main.go:141] libmachine: (bridge-804475) DBG | I0210 11:40:36.628594  164787 retry.go:31] will retry after 4.796476612s: waiting for domain to come up
	I0210 11:40:33.915294  166167 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 11:40:33.915329  166167 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 11:40:33.915336  166167 cache.go:56] Caching tarball of preloaded images
	I0210 11:40:33.915421  166167 preload.go:172] Found /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 11:40:33.915440  166167 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0210 11:40:33.915521  166167 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/config.json ...
	I0210 11:40:33.915538  166167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/config.json: {Name:mk754076024f66b063392bd8e7b86a0c5202ea5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:33.915665  166167 start.go:360] acquireMachinesLock for old-k8s-version-510006: {Name:mke6c3a615c5915495f0682c0833d8830c2c1004 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:40:37.291269  162916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:40:37.791008  162916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:40:38.291063  162916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:40:38.791823  162916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:40:38.920401  162916 kubeadm.go:1113] duration metric: took 3.797515413s to wait for elevateKubeSystemPrivileges
	I0210 11:40:38.920447  162916 kubeadm.go:394] duration metric: took 15.54532905s to StartCluster
	I0210 11:40:38.920471  162916 settings.go:142] acquiring lock: {Name:mk1369a4cca9eaf53282144d4cb555c048db8e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:38.920557  162916 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:40:38.921591  162916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/kubeconfig: {Name:mk38b84c4ae8f3ad09ecb56633115faef0fe39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:38.921851  162916 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0210 11:40:38.921866  162916 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 11:40:38.921940  162916 addons.go:69] Setting storage-provisioner=true in profile "flannel-804475"
	I0210 11:40:38.921844  162916 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 11:40:38.921962  162916 addons.go:69] Setting default-storageclass=true in profile "flannel-804475"
	I0210 11:40:38.921975  162916 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-804475"
	I0210 11:40:38.921957  162916 addons.go:238] Setting addon storage-provisioner=true in "flannel-804475"
	I0210 11:40:38.922069  162916 config.go:182] Loaded profile config "flannel-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:40:38.922107  162916 host.go:66] Checking if "flannel-804475" exists ...
	I0210 11:40:38.922411  162916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:40:38.922438  162916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:40:38.922567  162916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:40:38.922609  162916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:40:38.923482  162916 out.go:177] * Verifying Kubernetes components...
	I0210 11:40:38.924480  162916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:40:38.937647  162916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45217
	I0210 11:40:38.937652  162916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36715
	I0210 11:40:38.938134  162916 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:40:38.938150  162916 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:40:38.938698  162916 main.go:141] libmachine: Using API Version  1
	I0210 11:40:38.938716  162916 main.go:141] libmachine: Using API Version  1
	I0210 11:40:38.938754  162916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:40:38.938721  162916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:40:38.939202  162916 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:40:38.939202  162916 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:40:38.939419  162916 main.go:141] libmachine: (flannel-804475) Calling .GetState
	I0210 11:40:38.939846  162916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:40:38.939876  162916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:40:38.942892  162916 addons.go:238] Setting addon default-storageclass=true in "flannel-804475"
	I0210 11:40:38.942932  162916 host.go:66] Checking if "flannel-804475" exists ...
	I0210 11:40:38.943180  162916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:40:38.943235  162916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:40:38.958688  162916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0210 11:40:38.959309  162916 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:40:38.959995  162916 main.go:141] libmachine: Using API Version  1
	I0210 11:40:38.960025  162916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:40:38.960426  162916 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:40:38.960597  162916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45995
	I0210 11:40:38.961043  162916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:40:38.961065  162916 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:40:38.961087  162916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:40:38.961620  162916 main.go:141] libmachine: Using API Version  1
	I0210 11:40:38.961643  162916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:40:38.961947  162916 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:40:38.962173  162916 main.go:141] libmachine: (flannel-804475) Calling .GetState
	I0210 11:40:38.964124  162916 main.go:141] libmachine: (flannel-804475) Calling .DriverName
	I0210 11:40:38.965798  162916 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:40:38.966888  162916 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:40:38.966912  162916 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 11:40:38.966932  162916 main.go:141] libmachine: (flannel-804475) Calling .GetSSHHostname
	I0210 11:40:38.970240  162916 main.go:141] libmachine: (flannel-804475) DBG | domain flannel-804475 has defined MAC address 52:54:00:e8:4a:e0 in network mk-flannel-804475
	I0210 11:40:38.970790  162916 main.go:141] libmachine: (flannel-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:4a:e0", ip: ""} in network mk-flannel-804475: {Iface:virbr4 ExpiryTime:2025-02-10 12:40:08 +0000 UTC Type:0 Mac:52:54:00:e8:4a:e0 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:flannel-804475 Clientid:01:52:54:00:e8:4a:e0}
	I0210 11:40:38.970823  162916 main.go:141] libmachine: (flannel-804475) DBG | domain flannel-804475 has defined IP address 192.168.72.158 and MAC address 52:54:00:e8:4a:e0 in network mk-flannel-804475
	I0210 11:40:38.971082  162916 main.go:141] libmachine: (flannel-804475) Calling .GetSSHPort
	I0210 11:40:38.971285  162916 main.go:141] libmachine: (flannel-804475) Calling .GetSSHKeyPath
	I0210 11:40:38.971470  162916 main.go:141] libmachine: (flannel-804475) Calling .GetSSHUsername
	I0210 11:40:38.971633  162916 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/flannel-804475/id_rsa Username:docker}
	I0210 11:40:38.978745  162916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38085
	I0210 11:40:38.979105  162916 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:40:38.979601  162916 main.go:141] libmachine: Using API Version  1
	I0210 11:40:38.979628  162916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:40:38.979951  162916 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:40:38.980168  162916 main.go:141] libmachine: (flannel-804475) Calling .GetState
	I0210 11:40:38.981547  162916 main.go:141] libmachine: (flannel-804475) Calling .DriverName
	I0210 11:40:38.981744  162916 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 11:40:38.981767  162916 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 11:40:38.981782  162916 main.go:141] libmachine: (flannel-804475) Calling .GetSSHHostname
	I0210 11:40:38.984843  162916 main.go:141] libmachine: (flannel-804475) DBG | domain flannel-804475 has defined MAC address 52:54:00:e8:4a:e0 in network mk-flannel-804475
	I0210 11:40:38.985328  162916 main.go:141] libmachine: (flannel-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:4a:e0", ip: ""} in network mk-flannel-804475: {Iface:virbr4 ExpiryTime:2025-02-10 12:40:08 +0000 UTC Type:0 Mac:52:54:00:e8:4a:e0 Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:flannel-804475 Clientid:01:52:54:00:e8:4a:e0}
	I0210 11:40:38.985348  162916 main.go:141] libmachine: (flannel-804475) DBG | domain flannel-804475 has defined IP address 192.168.72.158 and MAC address 52:54:00:e8:4a:e0 in network mk-flannel-804475
	I0210 11:40:38.985519  162916 main.go:141] libmachine: (flannel-804475) Calling .GetSSHPort
	I0210 11:40:38.985652  162916 main.go:141] libmachine: (flannel-804475) Calling .GetSSHKeyPath
	I0210 11:40:38.985786  162916 main.go:141] libmachine: (flannel-804475) Calling .GetSSHUsername
	I0210 11:40:38.985904  162916 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/flannel-804475/id_rsa Username:docker}
	I0210 11:40:39.226356  162916 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:40:39.226526  162916 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0210 11:40:39.277068  162916 node_ready.go:35] waiting up to 15m0s for node "flannel-804475" to be "Ready" ...
	I0210 11:40:39.357193  162916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:40:39.358454  162916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 11:40:39.738143  162916 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0210 11:40:40.019734  162916 main.go:141] libmachine: Making call to close driver server
	I0210 11:40:40.019762  162916 main.go:141] libmachine: (flannel-804475) Calling .Close
	I0210 11:40:40.019813  162916 main.go:141] libmachine: Making call to close driver server
	I0210 11:40:40.019833  162916 main.go:141] libmachine: (flannel-804475) Calling .Close
	I0210 11:40:40.020106  162916 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:40:40.020124  162916 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:40:40.020133  162916 main.go:141] libmachine: Making call to close driver server
	I0210 11:40:40.020140  162916 main.go:141] libmachine: (flannel-804475) Calling .Close
	I0210 11:40:40.020189  162916 main.go:141] libmachine: (flannel-804475) DBG | Closing plugin on server side
	I0210 11:40:40.020240  162916 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:40:40.020272  162916 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:40:40.020282  162916 main.go:141] libmachine: Making call to close driver server
	I0210 11:40:40.020293  162916 main.go:141] libmachine: (flannel-804475) Calling .Close
	I0210 11:40:40.020371  162916 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:40:40.020390  162916 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:40:40.020416  162916 main.go:141] libmachine: (flannel-804475) DBG | Closing plugin on server side
	I0210 11:40:40.020543  162916 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:40:40.020569  162916 main.go:141] libmachine: (flannel-804475) DBG | Closing plugin on server side
	I0210 11:40:40.020572  162916 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:40:40.039316  162916 main.go:141] libmachine: Making call to close driver server
	I0210 11:40:40.039349  162916 main.go:141] libmachine: (flannel-804475) Calling .Close
	I0210 11:40:40.039588  162916 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:40:40.039605  162916 main.go:141] libmachine: (flannel-804475) DBG | Closing plugin on server side
	I0210 11:40:40.039611  162916 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:40:40.041908  162916 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0210 11:40:40.043085  162916 addons.go:514] duration metric: took 1.121214519s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0210 11:40:40.243999  162916 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-804475" context rescaled to 1 replicas
	I0210 11:40:41.280022  162916 node_ready.go:53] node "flannel-804475" has status "Ready":"False"
	I0210 11:40:41.428489  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:41.429256  164549 main.go:141] libmachine: (bridge-804475) found domain IP: 192.168.39.62
	I0210 11:40:41.429282  164549 main.go:141] libmachine: (bridge-804475) reserving static IP address...
	I0210 11:40:41.429296  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has current primary IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:41.429782  164549 main.go:141] libmachine: (bridge-804475) DBG | unable to find host DHCP lease matching {name: "bridge-804475", mac: "52:54:00:c6:bb:93", ip: "192.168.39.62"} in network mk-bridge-804475
	I0210 11:40:41.510555  164549 main.go:141] libmachine: (bridge-804475) DBG | Getting to WaitForSSH function...
	I0210 11:40:41.510596  164549 main.go:141] libmachine: (bridge-804475) reserved static IP address 192.168.39.62 for domain bridge-804475
	I0210 11:40:41.510613  164549 main.go:141] libmachine: (bridge-804475) waiting for SSH...
	I0210 11:40:41.513479  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:41.513778  164549 main.go:141] libmachine: (bridge-804475) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475
	I0210 11:40:41.513806  164549 main.go:141] libmachine: (bridge-804475) DBG | unable to find defined IP address of network mk-bridge-804475 interface with MAC address 52:54:00:c6:bb:93
	I0210 11:40:41.513950  164549 main.go:141] libmachine: (bridge-804475) DBG | Using SSH client type: external
	I0210 11:40:41.513973  164549 main.go:141] libmachine: (bridge-804475) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/bridge-804475/id_rsa (-rw-------)
	I0210 11:40:41.514007  164549 main.go:141] libmachine: (bridge-804475) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-109271/.minikube/machines/bridge-804475/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 11:40:41.514025  164549 main.go:141] libmachine: (bridge-804475) DBG | About to run SSH command:
	I0210 11:40:41.514062  164549 main.go:141] libmachine: (bridge-804475) DBG | exit 0
	I0210 11:40:41.517755  164549 main.go:141] libmachine: (bridge-804475) DBG | SSH cmd err, output: exit status 255: 
	I0210 11:40:41.517780  164549 main.go:141] libmachine: (bridge-804475) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0210 11:40:41.517791  164549 main.go:141] libmachine: (bridge-804475) DBG | command : exit 0
	I0210 11:40:41.517808  164549 main.go:141] libmachine: (bridge-804475) DBG | err     : exit status 255
	I0210 11:40:41.517821  164549 main.go:141] libmachine: (bridge-804475) DBG | output  : 
	I0210 11:40:45.909149  164665 start.go:364] duration metric: took 35.299933632s to acquireMachinesLock for "kubernetes-upgrade-557458"
	I0210 11:40:45.909231  164665 start.go:96] Skipping create...Using existing machine configuration
	I0210 11:40:45.909240  164665 fix.go:54] fixHost starting: 
	I0210 11:40:45.909683  164665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:40:45.909724  164665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:40:45.928859  164665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34713
	I0210 11:40:45.929339  164665 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:40:45.929829  164665 main.go:141] libmachine: Using API Version  1
	I0210 11:40:45.929853  164665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:40:45.930202  164665 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:40:45.930446  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:40:45.930617  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetState
	I0210 11:40:45.932327  164665 fix.go:112] recreateIfNeeded on kubernetes-upgrade-557458: state=Running err=<nil>
	W0210 11:40:45.932351  164665 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 11:40:45.934138  164665 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-557458" VM ...
	I0210 11:40:43.470885  162916 node_ready.go:53] node "flannel-804475" has status "Ready":"False"
	I0210 11:40:45.780797  162916 node_ready.go:53] node "flannel-804475" has status "Ready":"False"
	I0210 11:40:46.279791  162916 node_ready.go:49] node "flannel-804475" has status "Ready":"True"
	I0210 11:40:46.279817  162916 node_ready.go:38] duration metric: took 7.002704941s for node "flannel-804475" to be "Ready" ...
	I0210 11:40:46.279831  162916 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:40:46.282476  162916 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-wd7wx" in "kube-system" namespace to be "Ready" ...
	I0210 11:40:44.519496  164549 main.go:141] libmachine: (bridge-804475) DBG | Getting to WaitForSSH function...
	I0210 11:40:44.522652  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:44.523072  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:44.523105  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:44.523238  164549 main.go:141] libmachine: (bridge-804475) DBG | Using SSH client type: external
	I0210 11:40:44.523268  164549 main.go:141] libmachine: (bridge-804475) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/bridge-804475/id_rsa (-rw-------)
	I0210 11:40:44.523346  164549 main.go:141] libmachine: (bridge-804475) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-109271/.minikube/machines/bridge-804475/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 11:40:44.523379  164549 main.go:141] libmachine: (bridge-804475) DBG | About to run SSH command:
	I0210 11:40:44.523397  164549 main.go:141] libmachine: (bridge-804475) DBG | exit 0
	I0210 11:40:44.651480  164549 main.go:141] libmachine: (bridge-804475) DBG | SSH cmd err, output: <nil>: 
	I0210 11:40:44.651772  164549 main.go:141] libmachine: (bridge-804475) KVM machine creation complete
	I0210 11:40:44.652155  164549 main.go:141] libmachine: (bridge-804475) Calling .GetConfigRaw
	I0210 11:40:44.652762  164549 main.go:141] libmachine: (bridge-804475) Calling .DriverName
	I0210 11:40:44.652998  164549 main.go:141] libmachine: (bridge-804475) Calling .DriverName
	I0210 11:40:44.653183  164549 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 11:40:44.653202  164549 main.go:141] libmachine: (bridge-804475) Calling .GetState
	I0210 11:40:44.654816  164549 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 11:40:44.654833  164549 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 11:40:44.654841  164549 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 11:40:44.654849  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHHostname
	I0210 11:40:44.657777  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:44.658238  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:44.658272  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:44.658383  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHPort
	I0210 11:40:44.658571  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:44.658709  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:44.658870  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHUsername
	I0210 11:40:44.659084  164549 main.go:141] libmachine: Using SSH client type: native
	I0210 11:40:44.659330  164549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0210 11:40:44.659343  164549 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 11:40:44.770436  164549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:40:44.770468  164549 main.go:141] libmachine: Detecting the provisioner...
	I0210 11:40:44.770479  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHHostname
	I0210 11:40:44.773595  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:44.773997  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:44.774027  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:44.774187  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHPort
	I0210 11:40:44.774398  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:44.774576  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:44.774781  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHUsername
	I0210 11:40:44.775044  164549 main.go:141] libmachine: Using SSH client type: native
	I0210 11:40:44.775259  164549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0210 11:40:44.775273  164549 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 11:40:44.887759  164549 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 11:40:44.887842  164549 main.go:141] libmachine: found compatible host: buildroot
	I0210 11:40:44.887852  164549 main.go:141] libmachine: Provisioning with buildroot...
	I0210 11:40:44.887860  164549 main.go:141] libmachine: (bridge-804475) Calling .GetMachineName
	I0210 11:40:44.888127  164549 buildroot.go:166] provisioning hostname "bridge-804475"
	I0210 11:40:44.888158  164549 main.go:141] libmachine: (bridge-804475) Calling .GetMachineName
	I0210 11:40:44.888379  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHHostname
	I0210 11:40:44.891510  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:44.891917  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:44.891944  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:44.892124  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHPort
	I0210 11:40:44.892352  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:44.892525  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:44.892691  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHUsername
	I0210 11:40:44.892867  164549 main.go:141] libmachine: Using SSH client type: native
	I0210 11:40:44.893092  164549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0210 11:40:44.893112  164549 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-804475 && echo "bridge-804475" | sudo tee /etc/hostname
	I0210 11:40:45.021300  164549 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-804475
	
	I0210 11:40:45.021339  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHHostname
	I0210 11:40:45.024416  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.024845  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.024875  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.025132  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHPort
	I0210 11:40:45.025332  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:45.025519  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:45.025683  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHUsername
	I0210 11:40:45.025880  164549 main.go:141] libmachine: Using SSH client type: native
	I0210 11:40:45.026088  164549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0210 11:40:45.026115  164549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-804475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-804475/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-804475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:40:45.141012  164549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:40:45.141049  164549 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20385-109271/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-109271/.minikube}
	I0210 11:40:45.141081  164549 buildroot.go:174] setting up certificates
	I0210 11:40:45.141094  164549 provision.go:84] configureAuth start
	I0210 11:40:45.141111  164549 main.go:141] libmachine: (bridge-804475) Calling .GetMachineName
	I0210 11:40:45.141426  164549 main.go:141] libmachine: (bridge-804475) Calling .GetIP
	I0210 11:40:45.144692  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.145112  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.145142  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.145335  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHHostname
	I0210 11:40:45.147736  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.148084  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.148123  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.148276  164549 provision.go:143] copyHostCerts
	I0210 11:40:45.148354  164549 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem, removing ...
	I0210 11:40:45.148373  164549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem
	I0210 11:40:45.148437  164549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem (1078 bytes)
	I0210 11:40:45.148542  164549 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem, removing ...
	I0210 11:40:45.148553  164549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem
	I0210 11:40:45.148585  164549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem (1123 bytes)
	I0210 11:40:45.148664  164549 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem, removing ...
	I0210 11:40:45.148673  164549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem
	I0210 11:40:45.148705  164549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem (1679 bytes)
	I0210 11:40:45.148775  164549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem org=jenkins.bridge-804475 san=[127.0.0.1 192.168.39.62 bridge-804475 localhost minikube]
	I0210 11:40:45.279687  164549 provision.go:177] copyRemoteCerts
	I0210 11:40:45.279768  164549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:40:45.279804  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHHostname
	I0210 11:40:45.282497  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.282875  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.282910  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.283158  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHPort
	I0210 11:40:45.283404  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:45.283623  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHUsername
	I0210 11:40:45.283806  164549 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/bridge-804475/id_rsa Username:docker}
	I0210 11:40:45.367129  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:40:45.396137  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0210 11:40:45.419800  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 11:40:45.442025  164549 provision.go:87] duration metric: took 300.911986ms to configureAuth
	I0210 11:40:45.442065  164549 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:40:45.442272  164549 config.go:182] Loaded profile config "bridge-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:40:45.442347  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHHostname
	I0210 11:40:45.445279  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.445654  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.445680  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.445853  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHPort
	I0210 11:40:45.446025  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:45.446169  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:45.446273  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHUsername
	I0210 11:40:45.446424  164549 main.go:141] libmachine: Using SSH client type: native
	I0210 11:40:45.446596  164549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0210 11:40:45.446611  164549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 11:40:45.665437  164549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 11:40:45.665469  164549 main.go:141] libmachine: Checking connection to Docker...
	I0210 11:40:45.665482  164549 main.go:141] libmachine: (bridge-804475) Calling .GetURL
	I0210 11:40:45.666938  164549 main.go:141] libmachine: (bridge-804475) DBG | using libvirt version 6000000
	I0210 11:40:45.669418  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.669758  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.669779  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.669949  164549 main.go:141] libmachine: Docker is up and running!
	I0210 11:40:45.669967  164549 main.go:141] libmachine: Reticulating splines...
	I0210 11:40:45.669977  164549 client.go:171] duration metric: took 28.583521024s to LocalClient.Create
	I0210 11:40:45.670013  164549 start.go:167] duration metric: took 28.583603579s to libmachine.API.Create "bridge-804475"
	I0210 11:40:45.670026  164549 start.go:293] postStartSetup for "bridge-804475" (driver="kvm2")
	I0210 11:40:45.670042  164549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:40:45.670081  164549 main.go:141] libmachine: (bridge-804475) Calling .DriverName
	I0210 11:40:45.670344  164549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:40:45.670369  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHHostname
	I0210 11:40:45.672639  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.673068  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.673096  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.673294  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHPort
	I0210 11:40:45.673482  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:45.673646  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHUsername
	I0210 11:40:45.673790  164549 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/bridge-804475/id_rsa Username:docker}
	I0210 11:40:45.757195  164549 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:40:45.760947  164549 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:40:45.760974  164549 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/addons for local assets ...
	I0210 11:40:45.761051  164549 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/files for local assets ...
	I0210 11:40:45.761161  164549 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem -> 1164702.pem in /etc/ssl/certs
	I0210 11:40:45.761292  164549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:40:45.770158  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:40:45.793280  164549 start.go:296] duration metric: took 123.238506ms for postStartSetup
	I0210 11:40:45.793334  164549 main.go:141] libmachine: (bridge-804475) Calling .GetConfigRaw
	I0210 11:40:45.793923  164549 main.go:141] libmachine: (bridge-804475) Calling .GetIP
	I0210 11:40:45.796721  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.797149  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.797180  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.797483  164549 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/config.json ...
	I0210 11:40:45.797674  164549 start.go:128] duration metric: took 28.737087917s to createHost
	I0210 11:40:45.797699  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHHostname
	I0210 11:40:45.800330  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.800637  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.800671  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.800848  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHPort
	I0210 11:40:45.801039  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:45.801223  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:45.801353  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHUsername
	I0210 11:40:45.801522  164549 main.go:141] libmachine: Using SSH client type: native
	I0210 11:40:45.801765  164549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0210 11:40:45.801779  164549 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:40:45.908910  164549 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739187645.888082875
	
	I0210 11:40:45.908938  164549 fix.go:216] guest clock: 1739187645.888082875
	I0210 11:40:45.908949  164549 fix.go:229] Guest: 2025-02-10 11:40:45.888082875 +0000 UTC Remote: 2025-02-10 11:40:45.797686183 +0000 UTC m=+37.757922764 (delta=90.396692ms)
	I0210 11:40:45.908975  164549 fix.go:200] guest clock delta is within tolerance: 90.396692ms
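The delta reported above is simply the difference between the two timestamps on the preceding line:

	45.888082875 s (Guest) - 45.797686183 s (Remote) = 0.090396692 s = 90.396692 ms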
	I0210 11:40:45.908982  164549 start.go:83] releasing machines lock for "bridge-804475", held for 28.84861883s
	I0210 11:40:45.909013  164549 main.go:141] libmachine: (bridge-804475) Calling .DriverName
	I0210 11:40:45.909333  164549 main.go:141] libmachine: (bridge-804475) Calling .GetIP
	I0210 11:40:45.912577  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.913074  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.913130  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.913309  164549 main.go:141] libmachine: (bridge-804475) Calling .DriverName
	I0210 11:40:45.914045  164549 main.go:141] libmachine: (bridge-804475) Calling .DriverName
	I0210 11:40:45.914287  164549 main.go:141] libmachine: (bridge-804475) Calling .DriverName
	I0210 11:40:45.914351  164549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:40:45.914408  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHHostname
	I0210 11:40:45.914576  164549 ssh_runner.go:195] Run: cat /version.json
	I0210 11:40:45.914603  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHHostname
	I0210 11:40:45.917599  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.917697  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.918047  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.918084  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.918118  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:45.918135  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:45.918372  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHPort
	I0210 11:40:45.918390  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHPort
	I0210 11:40:45.918540  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:45.918704  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHUsername
	I0210 11:40:45.918734  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHKeyPath
	I0210 11:40:45.918925  164549 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/bridge-804475/id_rsa Username:docker}
	I0210 11:40:45.919000  164549 main.go:141] libmachine: (bridge-804475) Calling .GetSSHUsername
	I0210 11:40:45.919121  164549 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/bridge-804475/id_rsa Username:docker}
	I0210 11:40:46.003651  164549 ssh_runner.go:195] Run: systemctl --version
	I0210 11:40:46.031168  164549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 11:40:46.191652  164549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 11:40:46.197773  164549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:40:46.197851  164549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:40:46.218275  164549 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:40:46.218306  164549 start.go:495] detecting cgroup driver to use...
	I0210 11:40:46.218390  164549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:40:46.238312  164549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:40:46.253275  164549 docker.go:217] disabling cri-docker service (if available) ...
	I0210 11:40:46.253347  164549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 11:40:46.266843  164549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 11:40:46.279981  164549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 11:40:46.421856  164549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 11:40:46.584636  164549 docker.go:233] disabling docker service ...
	I0210 11:40:46.584707  164549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 11:40:46.599026  164549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 11:40:46.612410  164549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 11:40:46.750673  164549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 11:40:46.880416  164549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 11:40:46.894582  164549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:40:46.914088  164549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 11:40:46.914188  164549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:46.923880  164549 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 11:40:46.923959  164549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:46.933660  164549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:46.942871  164549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:46.951987  164549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:40:46.961632  164549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:46.970528  164549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:46.985672  164549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
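Taken together, the sed commands above leave /etc/crio/crio.conf.d/02-crio.conf with the following key/value lines (a sketch reconstructed from those commands only; which TOML tables the keys sit under in the drop-in is not visible in the log):

	# pause image and cgroup handling, as set by the sed edits above
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	# default_sysctls list is created if absent, then the unprivileged-port entry is prepended
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]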
	I0210 11:40:46.995035  164549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:40:47.003733  164549 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:40:47.003786  164549 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:40:47.015713  164549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:40:47.024867  164549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:40:47.141316  164549 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 11:40:47.224346  164549 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 11:40:47.224438  164549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 11:40:47.229290  164549 start.go:563] Will wait 60s for crictl version
	I0210 11:40:47.229361  164549 ssh_runner.go:195] Run: which crictl
	I0210 11:40:47.233441  164549 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:40:47.270171  164549 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 11:40:47.270261  164549 ssh_runner.go:195] Run: crio --version
	I0210 11:40:47.296631  164549 ssh_runner.go:195] Run: crio --version
	I0210 11:40:47.328711  164549 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 11:40:47.329759  164549 main.go:141] libmachine: (bridge-804475) Calling .GetIP
	I0210 11:40:47.332215  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:47.332572  164549 main.go:141] libmachine: (bridge-804475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:bb:93", ip: ""} in network mk-bridge-804475: {Iface:virbr1 ExpiryTime:2025-02-10 12:40:33 +0000 UTC Type:0 Mac:52:54:00:c6:bb:93 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:bridge-804475 Clientid:01:52:54:00:c6:bb:93}
	I0210 11:40:47.332595  164549 main.go:141] libmachine: (bridge-804475) DBG | domain bridge-804475 has defined IP address 192.168.39.62 and MAC address 52:54:00:c6:bb:93 in network mk-bridge-804475
	I0210 11:40:47.332771  164549 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 11:40:47.336427  164549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:40:47.347751  164549 kubeadm.go:883] updating cluster {Name:bridge-804475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-804475 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:40:47.347852  164549 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 11:40:47.347891  164549 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:40:47.380885  164549 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 11:40:47.380965  164549 ssh_runner.go:195] Run: which lz4
	I0210 11:40:47.384722  164549 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 11:40:47.388589  164549 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 11:40:47.388623  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 11:40:45.935436  164665 machine.go:93] provisionDockerMachine start ...
	I0210 11:40:45.935461  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:40:45.935688  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:40:45.938614  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:45.939100  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:45.939121  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:45.939333  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:40:45.939522  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:45.939665  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:45.939840  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:40:45.940051  164665 main.go:141] libmachine: Using SSH client type: native
	I0210 11:40:45.940297  164665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0210 11:40:45.940312  164665 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:40:46.048445  164665 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-557458
	
	I0210 11:40:46.048480  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetMachineName
	I0210 11:40:46.048744  164665 buildroot.go:166] provisioning hostname "kubernetes-upgrade-557458"
	I0210 11:40:46.048781  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetMachineName
	I0210 11:40:46.048996  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:40:46.051850  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.052385  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:46.052418  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.052646  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:40:46.052888  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:46.053118  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:46.053277  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:40:46.053474  164665 main.go:141] libmachine: Using SSH client type: native
	I0210 11:40:46.053692  164665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0210 11:40:46.053706  164665 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-557458 && echo "kubernetes-upgrade-557458" | sudo tee /etc/hostname
	I0210 11:40:46.177683  164665 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-557458
	
	I0210 11:40:46.177729  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:40:46.180836  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.181320  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:46.181355  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.181580  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:40:46.181782  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:46.181969  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:46.182159  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:40:46.182349  164665 main.go:141] libmachine: Using SSH client type: native
	I0210 11:40:46.182560  164665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0210 11:40:46.182577  164665 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-557458' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-557458/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-557458' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:40:46.303917  164665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:40:46.303951  164665 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20385-109271/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-109271/.minikube}
	I0210 11:40:46.303981  164665 buildroot.go:174] setting up certificates
	I0210 11:40:46.304000  164665 provision.go:84] configureAuth start
	I0210 11:40:46.304014  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetMachineName
	I0210 11:40:46.304333  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetIP
	I0210 11:40:46.308027  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.308541  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:46.308582  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.308908  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:40:46.318499  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.319073  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:46.319176  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.319316  164665 provision.go:143] copyHostCerts
	I0210 11:40:46.319404  164665 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem, removing ...
	I0210 11:40:46.319423  164665 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem
	I0210 11:40:46.319497  164665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem (1078 bytes)
	I0210 11:40:46.319616  164665 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem, removing ...
	I0210 11:40:46.319623  164665 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem
	I0210 11:40:46.319654  164665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem (1123 bytes)
	I0210 11:40:46.319728  164665 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem, removing ...
	I0210 11:40:46.319736  164665 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem
	I0210 11:40:46.319768  164665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem (1679 bytes)
	I0210 11:40:46.319837  164665 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-557458 san=[127.0.0.1 192.168.50.30 kubernetes-upgrade-557458 localhost minikube]
	I0210 11:40:46.539322  164665 provision.go:177] copyRemoteCerts
	I0210 11:40:46.539411  164665 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:40:46.539450  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:40:46.542237  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.542636  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:46.542671  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.542916  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:40:46.543158  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:46.543354  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:40:46.543546  164665 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa Username:docker}
	I0210 11:40:46.629238  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 11:40:46.656861  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:40:46.681357  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0210 11:40:46.714535  164665 provision.go:87] duration metric: took 410.51703ms to configureAuth
	I0210 11:40:46.714572  164665 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:40:46.714796  164665 config.go:182] Loaded profile config "kubernetes-upgrade-557458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:40:46.714896  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:40:46.717971  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.718309  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:46.718348  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:46.718656  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:40:46.718888  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:46.719038  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:46.719173  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:40:46.719367  164665 main.go:141] libmachine: Using SSH client type: native
	I0210 11:40:46.719573  164665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0210 11:40:46.719595  164665 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 11:40:48.289671  162916 pod_ready.go:103] pod "coredns-668d6bf9bc-wd7wx" in "kube-system" namespace has status "Ready":"False"
	I0210 11:40:50.789549  162916 pod_ready.go:103] pod "coredns-668d6bf9bc-wd7wx" in "kube-system" namespace has status "Ready":"False"
	I0210 11:40:48.639408  164549 crio.go:462] duration metric: took 1.25471038s to copy over tarball
	I0210 11:40:48.639494  164549 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 11:40:50.790179  164549 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.150656376s)
	I0210 11:40:50.790209  164549 crio.go:469] duration metric: took 2.150762875s to extract the tarball
	I0210 11:40:50.790218  164549 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 11:40:50.826707  164549 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:40:50.865476  164549 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 11:40:50.865501  164549 cache_images.go:84] Images are preloaded, skipping loading
	I0210 11:40:50.865509  164549 kubeadm.go:934] updating node { 192.168.39.62 8443 v1.32.1 crio true true} ...
	I0210 11:40:50.865643  164549 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-804475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-804475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0210 11:40:50.865737  164549 ssh_runner.go:195] Run: crio config
	I0210 11:40:50.922250  164549 cni.go:84] Creating CNI manager for "bridge"
	I0210 11:40:50.922275  164549 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 11:40:50.922297  164549 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.62 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-804475 NodeName:bridge-804475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 11:40:50.922430  164549 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-804475"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.62"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.62"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 11:40:50.922498  164549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 11:40:50.932143  164549 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:40:50.932213  164549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:40:50.941684  164549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0210 11:40:50.957540  164549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:40:50.972825  164549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0210 11:40:50.988868  164549 ssh_runner.go:195] Run: grep 192.168.39.62	control-plane.minikube.internal$ /etc/hosts
	I0210 11:40:50.992352  164549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:40:51.004006  164549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:40:51.112611  164549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:40:51.128460  164549 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475 for IP: 192.168.39.62
	I0210 11:40:51.128494  164549 certs.go:194] generating shared ca certs ...
	I0210 11:40:51.128529  164549 certs.go:226] acquiring lock for ca certs: {Name:mk41def3593b0ff6effd099cf80de2e0c576c931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:51.128712  164549 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key
	I0210 11:40:51.128785  164549 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key
	I0210 11:40:51.128804  164549 certs.go:256] generating profile certs ...
	I0210 11:40:51.128943  164549 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.key
	I0210 11:40:51.128966  164549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt with IP's: []
	I0210 11:40:51.218599  164549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt ...
	I0210 11:40:51.218631  164549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: {Name:mk57920492009b03809736621b48db76d865f90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:51.218818  164549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.key ...
	I0210 11:40:51.218833  164549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.key: {Name:mk09263714188c1050394f14ed0a587590f61266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:51.218939  164549 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.key.932efe22
	I0210 11:40:51.218956  164549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.crt.932efe22 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.62]
	I0210 11:40:51.293656  164549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.crt.932efe22 ...
	I0210 11:40:51.293687  164549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.crt.932efe22: {Name:mkd384bdeef39fa04136b130ebafcb58f3f0e805 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:51.293847  164549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.key.932efe22 ...
	I0210 11:40:51.293883  164549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.key.932efe22: {Name:mk40ee8da8c10d4984d978238f7d6792b6bd0982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:51.293965  164549 certs.go:381] copying /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.crt.932efe22 -> /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.crt
	I0210 11:40:51.294052  164549 certs.go:385] copying /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.key.932efe22 -> /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.key
	I0210 11:40:51.294111  164549 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/proxy-client.key
	I0210 11:40:51.294128  164549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/proxy-client.crt with IP's: []
	I0210 11:40:51.359114  164549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/proxy-client.crt ...
	I0210 11:40:51.359149  164549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/proxy-client.crt: {Name:mk5702152c9c28bfea55033d5ee5ebe3e4f3adec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:51.359362  164549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/proxy-client.key ...
	I0210 11:40:51.359380  164549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/proxy-client.key: {Name:mkbfc0c812ebd1310730b71b922a730e7d67dd70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:51.359575  164549 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem (1338 bytes)
	W0210 11:40:51.359617  164549 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470_empty.pem, impossibly tiny 0 bytes
	I0210 11:40:51.359630  164549 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 11:40:51.359656  164549 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem (1078 bytes)
	I0210 11:40:51.359682  164549 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem (1123 bytes)
	I0210 11:40:51.359707  164549 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem (1679 bytes)
	I0210 11:40:51.359748  164549 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:40:51.360327  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:40:51.385472  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0210 11:40:51.409762  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:40:51.432335  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 11:40:51.454860  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0210 11:40:51.478129  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:40:51.502865  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:40:51.526992  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 11:40:51.549460  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /usr/share/ca-certificates/1164702.pem (1708 bytes)
	I0210 11:40:51.571657  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:40:51.593799  164549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem --> /usr/share/ca-certificates/116470.pem (1338 bytes)
	I0210 11:40:51.615270  164549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 11:40:51.631045  164549 ssh_runner.go:195] Run: openssl version
	I0210 11:40:51.636451  164549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:40:51.646295  164549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:40:51.650528  164549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:40:51.650583  164549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:40:51.655914  164549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:40:51.665483  164549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116470.pem && ln -fs /usr/share/ca-certificates/116470.pem /etc/ssl/certs/116470.pem"
	I0210 11:40:51.675240  164549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116470.pem
	I0210 11:40:51.679446  164549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:41 /usr/share/ca-certificates/116470.pem
	I0210 11:40:51.679505  164549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116470.pem
	I0210 11:40:51.685027  164549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116470.pem /etc/ssl/certs/51391683.0"
	I0210 11:40:51.695339  164549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1164702.pem && ln -fs /usr/share/ca-certificates/1164702.pem /etc/ssl/certs/1164702.pem"
	I0210 11:40:51.705143  164549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1164702.pem
	I0210 11:40:51.709220  164549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:41 /usr/share/ca-certificates/1164702.pem
	I0210 11:40:51.709276  164549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1164702.pem
	I0210 11:40:51.714933  164549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1164702.pem /etc/ssl/certs/3ec20f2e.0"
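The test/ln pairs above follow OpenSSL's hashed-directory convention: each certificate placed in /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is the value the intervening "openssl x509 -hash -noout" runs print. A generic bash sketch of the same pattern (illustration only, not minikube's code):

	# expose a CA certificate under its subject-hash name so OpenSSL's default verify dir can resolve it
	cert=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$cert" /etc/ssl/certs/"$(openssl x509 -hash -noout -in "$cert")".0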
	I0210 11:40:51.725382  164549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:40:51.729216  164549 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 11:40:51.729281  164549 kubeadm.go:392] StartCluster: {Name:bridge-804475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-804475 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:40:51.729393  164549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 11:40:51.729443  164549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:40:51.766590  164549 cri.go:89] found id: ""
	I0210 11:40:51.766675  164549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 11:40:51.776202  164549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:40:51.785239  164549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:40:51.797660  164549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:40:51.797688  164549 kubeadm.go:157] found existing configuration files:
	
	I0210 11:40:51.797742  164549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:40:51.807024  164549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:40:51.807084  164549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:40:51.818245  164549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:40:51.828239  164549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:40:51.828296  164549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:40:51.839869  164549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:40:51.850867  164549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:40:51.850927  164549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:40:51.862458  164549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:40:51.871276  164549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:40:51.871350  164549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:40:51.880480  164549 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:40:52.040459  164549 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:40:54.743764  166167 start.go:364] duration metric: took 20.828064323s to acquireMachinesLock for "old-k8s-version-510006"
	I0210 11:40:54.743850  166167 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-510006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 11:40:54.743993  166167 start.go:125] createHost starting for "" (driver="kvm2")
	I0210 11:40:54.515520  164665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 11:40:54.515552  164665 machine.go:96] duration metric: took 8.580101206s to provisionDockerMachine
	I0210 11:40:54.515565  164665 start.go:293] postStartSetup for "kubernetes-upgrade-557458" (driver="kvm2")
	I0210 11:40:54.515576  164665 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:40:54.515592  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:40:54.515963  164665 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:40:54.516001  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:40:54.518789  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:54.519217  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:54.519248  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:54.519456  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:40:54.519652  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:54.519829  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:40:54.519988  164665 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa Username:docker}
	I0210 11:40:54.600888  164665 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:40:54.604621  164665 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:40:54.604651  164665 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/addons for local assets ...
	I0210 11:40:54.604731  164665 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/files for local assets ...
	I0210 11:40:54.604803  164665 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem -> 1164702.pem in /etc/ssl/certs
	I0210 11:40:54.604886  164665 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:40:54.613567  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:40:54.635398  164665 start.go:296] duration metric: took 119.819238ms for postStartSetup
	I0210 11:40:54.635445  164665 fix.go:56] duration metric: took 8.726205242s for fixHost
	I0210 11:40:54.635472  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:40:54.638269  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:54.638655  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:54.638685  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:54.638857  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:40:54.639046  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:54.639211  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:54.639346  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:40:54.639534  164665 main.go:141] libmachine: Using SSH client type: native
	I0210 11:40:54.639734  164665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0210 11:40:54.639746  164665 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:40:54.743617  164665 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739187654.733497262
	
	I0210 11:40:54.743642  164665 fix.go:216] guest clock: 1739187654.733497262
	I0210 11:40:54.743651  164665 fix.go:229] Guest: 2025-02-10 11:40:54.733497262 +0000 UTC Remote: 2025-02-10 11:40:54.63545147 +0000 UTC m=+44.183365961 (delta=98.045792ms)
	I0210 11:40:54.743677  164665 fix.go:200] guest clock delta is within tolerance: 98.045792ms
	I0210 11:40:54.743683  164665 start.go:83] releasing machines lock for "kubernetes-upgrade-557458", held for 8.834485001s
	I0210 11:40:54.743712  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:40:54.743980  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetIP
	I0210 11:40:54.747207  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:54.747641  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:54.747674  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:54.747855  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:40:54.748410  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:40:54.748646  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .DriverName
	I0210 11:40:54.748738  164665 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:40:54.748789  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:40:54.748865  164665 ssh_runner.go:195] Run: cat /version.json
	I0210 11:40:54.748894  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHHostname
	I0210 11:40:54.751930  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:54.752171  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:54.752301  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:54.752332  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:54.752558  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:40:54.752645  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:54.752673  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:54.752716  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:54.752889  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:40:54.752910  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHPort
	I0210 11:40:54.753132  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHKeyPath
	I0210 11:40:54.753211  164665 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa Username:docker}
	I0210 11:40:54.753306  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetSSHUsername
	I0210 11:40:54.753503  164665 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/kubernetes-upgrade-557458/id_rsa Username:docker}
	I0210 11:40:54.836379  164665 ssh_runner.go:195] Run: systemctl --version
	I0210 11:40:54.872880  164665 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 11:40:55.030979  164665 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 11:40:55.038589  164665 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:40:55.038668  164665 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:40:55.051988  164665 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0210 11:40:55.052017  164665 start.go:495] detecting cgroup driver to use...
	I0210 11:40:55.052103  164665 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:40:55.069480  164665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:40:55.083626  164665 docker.go:217] disabling cri-docker service (if available) ...
	I0210 11:40:55.083727  164665 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 11:40:55.097640  164665 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 11:40:55.111286  164665 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 11:40:55.254161  164665 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 11:40:55.406564  164665 docker.go:233] disabling docker service ...
	I0210 11:40:55.406628  164665 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 11:40:55.423785  164665 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 11:40:55.438784  164665 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 11:40:53.035494  162916 pod_ready.go:103] pod "coredns-668d6bf9bc-wd7wx" in "kube-system" namespace has status "Ready":"False"
	I0210 11:40:55.289603  162916 pod_ready.go:103] pod "coredns-668d6bf9bc-wd7wx" in "kube-system" namespace has status "Ready":"False"
	I0210 11:40:55.577742  164665 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 11:40:55.719903  164665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 11:40:55.739498  164665 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:40:55.791711  164665 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 11:40:55.791792  164665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:55.819538  164665 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 11:40:55.819624  164665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:55.839077  164665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:55.934883  164665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:55.962302  164665 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:40:56.096245  164665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:56.202245  164665 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:56.485468  164665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:40:56.565018  164665 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:40:56.619446  164665 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:40:56.704891  164665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:40:57.111546  164665 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 11:40:57.849038  164665 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 11:40:57.849139  164665 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 11:40:57.854426  164665 start.go:563] Will wait 60s for crictl version
	I0210 11:40:57.854492  164665 ssh_runner.go:195] Run: which crictl
	I0210 11:40:57.859296  164665 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:40:57.906657  164665 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 11:40:57.906760  164665 ssh_runner.go:195] Run: crio --version
	I0210 11:40:57.941888  164665 ssh_runner.go:195] Run: crio --version
	I0210 11:40:57.975404  164665 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 11:40:54.745767  166167 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 11:40:54.745976  166167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:40:54.746018  166167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:40:54.762803  166167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0210 11:40:54.763211  166167 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:40:54.763849  166167 main.go:141] libmachine: Using API Version  1
	I0210 11:40:54.763872  166167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:40:54.764204  166167 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:40:54.764423  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetMachineName
	I0210 11:40:54.764570  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:40:54.764779  166167 start.go:159] libmachine.API.Create for "old-k8s-version-510006" (driver="kvm2")
	I0210 11:40:54.764820  166167 client.go:168] LocalClient.Create starting
	I0210 11:40:54.764860  166167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem
	I0210 11:40:54.764907  166167 main.go:141] libmachine: Decoding PEM data...
	I0210 11:40:54.764928  166167 main.go:141] libmachine: Parsing certificate...
	I0210 11:40:54.765003  166167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem
	I0210 11:40:54.765032  166167 main.go:141] libmachine: Decoding PEM data...
	I0210 11:40:54.765046  166167 main.go:141] libmachine: Parsing certificate...
	I0210 11:40:54.765059  166167 main.go:141] libmachine: Running pre-create checks...
	I0210 11:40:54.765069  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .PreCreateCheck
	I0210 11:40:54.765488  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetConfigRaw
	I0210 11:40:54.765934  166167 main.go:141] libmachine: Creating machine...
	I0210 11:40:54.765954  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .Create
	I0210 11:40:54.766085  166167 main.go:141] libmachine: (old-k8s-version-510006) creating KVM machine...
	I0210 11:40:54.766114  166167 main.go:141] libmachine: (old-k8s-version-510006) creating network...
	I0210 11:40:54.767087  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found existing default KVM network
	I0210 11:40:54.768530  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:54.768372  166356 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a0:37:f0} reservation:<nil>}
	I0210 11:40:54.769367  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:54.769280  166356 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:da:f8:fc} reservation:<nil>}
	I0210 11:40:54.770503  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:54.770401  166356 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026d620}
	I0210 11:40:54.770528  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | created network xml: 
	I0210 11:40:54.770540  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | <network>
	I0210 11:40:54.770561  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   <name>mk-old-k8s-version-510006</name>
	I0210 11:40:54.770571  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   <dns enable='no'/>
	I0210 11:40:54.770577  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   
	I0210 11:40:54.770599  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0210 11:40:54.770635  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |     <dhcp>
	I0210 11:40:54.770648  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0210 11:40:54.770655  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |     </dhcp>
	I0210 11:40:54.770661  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   </ip>
	I0210 11:40:54.770665  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   
	I0210 11:40:54.770680  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | </network>
	I0210 11:40:54.770687  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | 
	I0210 11:40:54.775561  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | trying to create private KVM network mk-old-k8s-version-510006 192.168.61.0/24...
	I0210 11:40:54.851626  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | private KVM network mk-old-k8s-version-510006 192.168.61.0/24 created
	I0210 11:40:54.851660  166167 main.go:141] libmachine: (old-k8s-version-510006) setting up store path in /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006 ...
	I0210 11:40:54.851674  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:54.851602  166356 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:40:54.851691  166167 main.go:141] libmachine: (old-k8s-version-510006) building disk image from file:///home/jenkins/minikube-integration/20385-109271/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 11:40:54.851815  166167 main.go:141] libmachine: (old-k8s-version-510006) Downloading /home/jenkins/minikube-integration/20385-109271/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20385-109271/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 11:40:55.111266  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:55.111102  166356 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa...
	I0210 11:40:55.176624  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:55.176475  166356 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/old-k8s-version-510006.rawdisk...
	I0210 11:40:55.176662  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | Writing magic tar header
	I0210 11:40:55.176680  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | Writing SSH key tar header
	I0210 11:40:55.176693  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:55.176628  166356 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006 ...
	I0210 11:40:55.176784  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006
	I0210 11:40:55.176841  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube/machines
	I0210 11:40:55.176867  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006 (perms=drwx------)
	I0210 11:40:55.176894  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:40:55.176924  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube/machines (perms=drwxr-xr-x)
	I0210 11:40:55.176948  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271
	I0210 11:40:55.176964  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0210 11:40:55.176977  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins
	I0210 11:40:55.176988  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube (perms=drwxr-xr-x)
	I0210 11:40:55.177003  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home
	I0210 11:40:55.177013  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins/minikube-integration/20385-109271 (perms=drwxrwxr-x)
	I0210 11:40:55.177031  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0210 11:40:55.177044  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0210 11:40:55.177058  166167 main.go:141] libmachine: (old-k8s-version-510006) creating domain...
	I0210 11:40:55.177069  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | skipping /home - not owner
	I0210 11:40:55.178351  166167 main.go:141] libmachine: (old-k8s-version-510006) define libvirt domain using xml: 
	I0210 11:40:55.178373  166167 main.go:141] libmachine: (old-k8s-version-510006) <domain type='kvm'>
	I0210 11:40:55.178384  166167 main.go:141] libmachine: (old-k8s-version-510006)   <name>old-k8s-version-510006</name>
	I0210 11:40:55.178392  166167 main.go:141] libmachine: (old-k8s-version-510006)   <memory unit='MiB'>2200</memory>
	I0210 11:40:55.178400  166167 main.go:141] libmachine: (old-k8s-version-510006)   <vcpu>2</vcpu>
	I0210 11:40:55.178426  166167 main.go:141] libmachine: (old-k8s-version-510006)   <features>
	I0210 11:40:55.178441  166167 main.go:141] libmachine: (old-k8s-version-510006)     <acpi/>
	I0210 11:40:55.178454  166167 main.go:141] libmachine: (old-k8s-version-510006)     <apic/>
	I0210 11:40:55.178465  166167 main.go:141] libmachine: (old-k8s-version-510006)     <pae/>
	I0210 11:40:55.178472  166167 main.go:141] libmachine: (old-k8s-version-510006)     
	I0210 11:40:55.178483  166167 main.go:141] libmachine: (old-k8s-version-510006)   </features>
	I0210 11:40:55.178491  166167 main.go:141] libmachine: (old-k8s-version-510006)   <cpu mode='host-passthrough'>
	I0210 11:40:55.178502  166167 main.go:141] libmachine: (old-k8s-version-510006)   
	I0210 11:40:55.178515  166167 main.go:141] libmachine: (old-k8s-version-510006)   </cpu>
	I0210 11:40:55.178523  166167 main.go:141] libmachine: (old-k8s-version-510006)   <os>
	I0210 11:40:55.178541  166167 main.go:141] libmachine: (old-k8s-version-510006)     <type>hvm</type>
	I0210 11:40:55.178567  166167 main.go:141] libmachine: (old-k8s-version-510006)     <boot dev='cdrom'/>
	I0210 11:40:55.178589  166167 main.go:141] libmachine: (old-k8s-version-510006)     <boot dev='hd'/>
	I0210 11:40:55.178601  166167 main.go:141] libmachine: (old-k8s-version-510006)     <bootmenu enable='no'/>
	I0210 11:40:55.178610  166167 main.go:141] libmachine: (old-k8s-version-510006)   </os>
	I0210 11:40:55.178617  166167 main.go:141] libmachine: (old-k8s-version-510006)   <devices>
	I0210 11:40:55.178630  166167 main.go:141] libmachine: (old-k8s-version-510006)     <disk type='file' device='cdrom'>
	I0210 11:40:55.178644  166167 main.go:141] libmachine: (old-k8s-version-510006)       <source file='/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/boot2docker.iso'/>
	I0210 11:40:55.178664  166167 main.go:141] libmachine: (old-k8s-version-510006)       <target dev='hdc' bus='scsi'/>
	I0210 11:40:55.178680  166167 main.go:141] libmachine: (old-k8s-version-510006)       <readonly/>
	I0210 11:40:55.178714  166167 main.go:141] libmachine: (old-k8s-version-510006)     </disk>
	I0210 11:40:55.178734  166167 main.go:141] libmachine: (old-k8s-version-510006)     <disk type='file' device='disk'>
	I0210 11:40:55.178761  166167 main.go:141] libmachine: (old-k8s-version-510006)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0210 11:40:55.178778  166167 main.go:141] libmachine: (old-k8s-version-510006)       <source file='/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/old-k8s-version-510006.rawdisk'/>
	I0210 11:40:55.178792  166167 main.go:141] libmachine: (old-k8s-version-510006)       <target dev='hda' bus='virtio'/>
	I0210 11:40:55.178803  166167 main.go:141] libmachine: (old-k8s-version-510006)     </disk>
	I0210 11:40:55.178816  166167 main.go:141] libmachine: (old-k8s-version-510006)     <interface type='network'>
	I0210 11:40:55.178829  166167 main.go:141] libmachine: (old-k8s-version-510006)       <source network='mk-old-k8s-version-510006'/>
	I0210 11:40:55.178842  166167 main.go:141] libmachine: (old-k8s-version-510006)       <model type='virtio'/>
	I0210 11:40:55.178852  166167 main.go:141] libmachine: (old-k8s-version-510006)     </interface>
	I0210 11:40:55.178862  166167 main.go:141] libmachine: (old-k8s-version-510006)     <interface type='network'>
	I0210 11:40:55.178874  166167 main.go:141] libmachine: (old-k8s-version-510006)       <source network='default'/>
	I0210 11:40:55.178884  166167 main.go:141] libmachine: (old-k8s-version-510006)       <model type='virtio'/>
	I0210 11:40:55.178894  166167 main.go:141] libmachine: (old-k8s-version-510006)     </interface>
	I0210 11:40:55.178904  166167 main.go:141] libmachine: (old-k8s-version-510006)     <serial type='pty'>
	I0210 11:40:55.178914  166167 main.go:141] libmachine: (old-k8s-version-510006)       <target port='0'/>
	I0210 11:40:55.178922  166167 main.go:141] libmachine: (old-k8s-version-510006)     </serial>
	I0210 11:40:55.178933  166167 main.go:141] libmachine: (old-k8s-version-510006)     <console type='pty'>
	I0210 11:40:55.178945  166167 main.go:141] libmachine: (old-k8s-version-510006)       <target type='serial' port='0'/>
	I0210 11:40:55.178956  166167 main.go:141] libmachine: (old-k8s-version-510006)     </console>
	I0210 11:40:55.178969  166167 main.go:141] libmachine: (old-k8s-version-510006)     <rng model='virtio'>
	I0210 11:40:55.178981  166167 main.go:141] libmachine: (old-k8s-version-510006)       <backend model='random'>/dev/random</backend>
	I0210 11:40:55.178990  166167 main.go:141] libmachine: (old-k8s-version-510006)     </rng>
	I0210 11:40:55.179000  166167 main.go:141] libmachine: (old-k8s-version-510006)     
	I0210 11:40:55.179008  166167 main.go:141] libmachine: (old-k8s-version-510006)     
	I0210 11:40:55.179018  166167 main.go:141] libmachine: (old-k8s-version-510006)   </devices>
	I0210 11:40:55.179026  166167 main.go:141] libmachine: (old-k8s-version-510006) </domain>
	I0210 11:40:55.179035  166167 main.go:141] libmachine: (old-k8s-version-510006) 
	I0210 11:40:55.183970  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:f1:bf:b8 in network default
	I0210 11:40:55.184659  166167 main.go:141] libmachine: (old-k8s-version-510006) starting domain...
	I0210 11:40:55.184681  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:55.184690  166167 main.go:141] libmachine: (old-k8s-version-510006) ensuring networks are active...
	I0210 11:40:55.185492  166167 main.go:141] libmachine: (old-k8s-version-510006) Ensuring network default is active
	I0210 11:40:55.185867  166167 main.go:141] libmachine: (old-k8s-version-510006) Ensuring network mk-old-k8s-version-510006 is active
	I0210 11:40:55.186453  166167 main.go:141] libmachine: (old-k8s-version-510006) getting domain XML...
	I0210 11:40:55.187329  166167 main.go:141] libmachine: (old-k8s-version-510006) creating domain...
	I0210 11:40:56.766544  166167 main.go:141] libmachine: (old-k8s-version-510006) waiting for IP...
	I0210 11:40:56.767626  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:56.768142  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:56.768214  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:56.768132  166356 retry.go:31] will retry after 276.457487ms: waiting for domain to come up
	I0210 11:40:57.046927  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:57.047743  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:57.047767  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:57.047653  166356 retry.go:31] will retry after 313.855375ms: waiting for domain to come up
	I0210 11:40:57.363765  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:57.364406  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:57.364432  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:57.364299  166356 retry.go:31] will retry after 332.646369ms: waiting for domain to come up
	I0210 11:40:57.699062  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:57.699734  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:57.699765  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:57.699708  166356 retry.go:31] will retry after 553.723279ms: waiting for domain to come up
	I0210 11:40:58.255647  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:58.256293  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:58.256321  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:58.256241  166356 retry.go:31] will retry after 601.932201ms: waiting for domain to come up
	I0210 11:40:57.976522  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) Calling .GetIP
	I0210 11:40:57.979690  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:57.980187  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:eb:a6", ip: ""} in network mk-kubernetes-upgrade-557458: {Iface:virbr2 ExpiryTime:2025-02-10 12:39:44 +0000 UTC Type:0 Mac:52:54:00:4b:eb:a6 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:kubernetes-upgrade-557458 Clientid:01:52:54:00:4b:eb:a6}
	I0210 11:40:57.980225  164665 main.go:141] libmachine: (kubernetes-upgrade-557458) DBG | domain kubernetes-upgrade-557458 has defined IP address 192.168.50.30 and MAC address 52:54:00:4b:eb:a6 in network mk-kubernetes-upgrade-557458
	I0210 11:40:57.980476  164665 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0210 11:40:57.985232  164665 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-557458 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-557458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:40:57.985379  164665 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 11:40:57.985443  164665 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:40:58.039262  164665 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 11:40:58.039293  164665 crio.go:433] Images already preloaded, skipping extraction
	I0210 11:40:58.039358  164665 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:40:58.074911  164665 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 11:40:58.074941  164665 cache_images.go:84] Images are preloaded, skipping loading
	I0210 11:40:58.074950  164665 kubeadm.go:934] updating node { 192.168.50.30 8443 v1.32.1 crio true true} ...
	I0210 11:40:58.075099  164665 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-557458 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-557458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:40:58.075226  164665 ssh_runner.go:195] Run: crio config
	I0210 11:40:58.482601  164665 cni.go:84] Creating CNI manager for ""
	I0210 11:40:58.482632  164665 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:40:58.482646  164665 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 11:40:58.482677  164665 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.30 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-557458 NodeName:kubernetes-upgrade-557458 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 11:40:58.482879  164665 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-557458"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.30"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.30"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 11:40:58.482961  164665 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 11:40:58.594119  164665 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:40:58.594219  164665 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:40:58.645745  164665 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0210 11:40:58.689592  164665 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:40:58.731173  164665 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0210 11:40:58.757023  164665 ssh_runner.go:195] Run: grep 192.168.50.30	control-plane.minikube.internal$ /etc/hosts
	I0210 11:40:58.765152  164665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:40:58.967893  164665 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:40:59.014336  164665 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458 for IP: 192.168.50.30
	I0210 11:40:59.014379  164665 certs.go:194] generating shared ca certs ...
	I0210 11:40:59.014403  164665 certs.go:226] acquiring lock for ca certs: {Name:mk41def3593b0ff6effd099cf80de2e0c576c931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:59.014595  164665 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key
	I0210 11:40:59.014647  164665 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key
	I0210 11:40:59.014659  164665 certs.go:256] generating profile certs ...
	I0210 11:40:59.014798  164665 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/client.key
	I0210 11:40:59.014867  164665 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.key.3052cc4e
	I0210 11:40:59.014916  164665 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/proxy-client.key
	I0210 11:40:59.015065  164665 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem (1338 bytes)
	W0210 11:40:59.015105  164665 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470_empty.pem, impossibly tiny 0 bytes
	I0210 11:40:59.015120  164665 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 11:40:59.015151  164665 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem (1078 bytes)
	I0210 11:40:59.015211  164665 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem (1123 bytes)
	I0210 11:40:59.015248  164665 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem (1679 bytes)
	I0210 11:40:59.015305  164665 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:40:59.016213  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:40:59.058612  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0210 11:40:59.124365  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:40:59.171317  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 11:40:59.257267  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0210 11:40:59.303486  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:40:59.387605  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:40:59.434592  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kubernetes-upgrade-557458/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:40:59.471554  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:40:59.498986  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem --> /usr/share/ca-certificates/116470.pem (1338 bytes)
	I0210 11:40:59.524394  164665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /usr/share/ca-certificates/1164702.pem (1708 bytes)
	I0210 11:40:59.549225  164665 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 11:40:59.565641  164665 ssh_runner.go:195] Run: openssl version
	I0210 11:40:59.572102  164665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1164702.pem && ln -fs /usr/share/ca-certificates/1164702.pem /etc/ssl/certs/1164702.pem"
	I0210 11:40:59.582293  164665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1164702.pem
	I0210 11:40:59.588017  164665 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:41 /usr/share/ca-certificates/1164702.pem
	I0210 11:40:59.588068  164665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1164702.pem
	I0210 11:40:59.595165  164665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1164702.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:40:59.605032  164665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:40:59.616796  164665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:40:59.621610  164665 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:40:59.621665  164665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:40:59.627383  164665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:40:59.636633  164665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116470.pem && ln -fs /usr/share/ca-certificates/116470.pem /etc/ssl/certs/116470.pem"
	I0210 11:40:59.647369  164665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116470.pem
	I0210 11:40:59.651793  164665 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:41 /usr/share/ca-certificates/116470.pem
	I0210 11:40:59.651849  164665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116470.pem
	I0210 11:40:59.657702  164665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116470.pem /etc/ssl/certs/51391683.0"
	I0210 11:40:59.666899  164665 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:40:59.671407  164665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 11:40:59.677072  164665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 11:40:59.682398  164665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 11:40:59.687710  164665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 11:40:59.693118  164665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 11:40:59.698372  164665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 11:40:59.703694  164665 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-557458 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-557458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:40:59.703799  164665 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 11:40:59.703849  164665 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:40:59.747104  164665 cri.go:89] found id: "b16bce0dd2c0849d0a41f964fa32c36404a733321329b290e68571eaed6126cd"
	I0210 11:40:59.747133  164665 cri.go:89] found id: "6cdf92b49a176aa95e07eaaf90bd28505690726abf266a66843c930cc74bc078"
	I0210 11:40:59.747139  164665 cri.go:89] found id: "ea28d2442edf62c108493c6c994d2f6db12571e603b09f928feb5dd87836db2c"
	I0210 11:40:59.747144  164665 cri.go:89] found id: "d4e9ba4a215ce4441650703041c8c51bca6b828e260028ac23401c7bf9452c8e"
	I0210 11:40:59.747148  164665 cri.go:89] found id: "0e4a79f49195e7d7d754fbfac6a46b1000cad1277a6670ddbb86589e111f9bf4"
	I0210 11:40:59.747151  164665 cri.go:89] found id: "50958fc1bdc80e183b9023393b6ce4318ce635c19b7215f38ab9bbc94357fe2d"
	I0210 11:40:59.747155  164665 cri.go:89] found id: "e5cc7713710eaccd0a27e0d3ec0b9d88e78fb2b54c04720a04a73cd4dc225c4e"
	I0210 11:40:59.747164  164665 cri.go:89] found id: "a66befd70e835c5bbef205b096a58f207f0095686c8da8e07cccf18903ad170c"
	I0210 11:40:59.747168  164665 cri.go:89] found id: "938cff461ca33a9710d0a037546da47f57950d843f10d3a33efcf7ff20ddbc6d"
	I0210 11:40:59.747176  164665 cri.go:89] found id: "ac4ffbdc01ecf8556dda7e8854eea974fc6e96d98e98b96c5c8d083c68c0e144"
	I0210 11:40:59.747180  164665 cri.go:89] found id: ""
	I0210 11:40:59.747250  164665 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-557458 -n kubernetes-upgrade-557458
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-557458 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
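Note: the post-mortem log above shows minikube validating each control-plane certificate with "openssl x509 -noout -checkend 86400", which exits non-zero only if the certificate expires within the next 86400 seconds (24 hours). The loop below is an illustrative way to re-run the same check by hand inside a still-running node (for example after "minikube ssh -p <profile>"); the certificate paths are copied from the log, but the loop itself is not part of the test harness and may need root depending on file permissions.

	for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/apiserver-etcd-client.crt \
	           /var/lib/minikube/certs/etcd/server.crt \
	           /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	           /var/lib/minikube/certs/etcd/peer.crt \
	           /var/lib/minikube/certs/front-proxy-client.crt; do
	  # -checkend 86400: succeed only if the cert is still valid 24h from now
	  openssl x509 -noout -in "$crt" -checkend 86400 \
	    && echo "ok       $crt" \
	    || echo "expiring $crt"
	done
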
helpers_test.go:175: Cleaning up "kubernetes-upgrade-557458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-557458
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-557458: (1.30528474s)
--- FAIL: TestKubernetesUpgrade (412.03s)
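
For manual triage, the upgrade path this test exercises can be approximated with the minikube CLI directly. The sketch below reuses the driver, runtime, memory and Kubernetes-version flags that appear elsewhere in this report; the profile name is arbitrary (not the one used by the test), and the sequence is a simplification of what the test actually does, so treat it as a repro aid rather than an equivalent of the test.

	PROFILE=kubernetes-upgrade-repro   # arbitrary local profile name
	out/minikube-linux-amd64 start -p "$PROFILE" --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0
	out/minikube-linux-amd64 stop -p "$PROFILE"
	out/minikube-linux-amd64 start -p "$PROFILE" --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.32.1
	out/minikube-linux-amd64 delete -p "$PROFILE"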

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (295.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-510006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0210 11:40:53.023562  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-510006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m54.635527216s)

                                                
                                                
-- stdout --
	* [old-k8s-version-510006] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-510006" primary control-plane node in "old-k8s-version-510006" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 11:40:33.836746  166167 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:40:33.836847  166167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:40:33.836858  166167 out.go:358] Setting ErrFile to fd 2...
	I0210 11:40:33.836865  166167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:40:33.837045  166167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:40:33.837684  166167 out.go:352] Setting JSON to false
	I0210 11:40:33.838761  166167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8576,"bootTime":1739179058,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 11:40:33.838875  166167 start.go:139] virtualization: kvm guest
	I0210 11:40:33.840932  166167 out.go:177] * [old-k8s-version-510006] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 11:40:33.842170  166167 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:40:33.842200  166167 notify.go:220] Checking for updates...
	I0210 11:40:33.844559  166167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:40:33.845725  166167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:40:33.846881  166167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:40:33.847978  166167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 11:40:33.849154  166167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:40:33.851093  166167 config.go:182] Loaded profile config "bridge-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:40:33.851249  166167 config.go:182] Loaded profile config "flannel-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:40:33.851382  166167 config.go:182] Loaded profile config "kubernetes-upgrade-557458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:40:33.851509  166167 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:40:33.890654  166167 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 11:40:33.891961  166167 start.go:297] selected driver: kvm2
	I0210 11:40:33.891977  166167 start.go:901] validating driver "kvm2" against <nil>
	I0210 11:40:33.891988  166167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:40:33.892751  166167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:40:33.892855  166167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-109271/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 11:40:33.911241  166167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 11:40:33.911307  166167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 11:40:33.911642  166167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:40:33.911682  166167 cni.go:84] Creating CNI manager for ""
	I0210 11:40:33.911742  166167 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:40:33.911755  166167 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 11:40:33.911839  166167 start.go:340] cluster config:
	{Name:old-k8s-version-510006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:40:33.911973  166167 iso.go:125] acquiring lock: {Name:mk479d49a84808a4b16be867aad83d1d3d802291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:40:33.914057  166167 out.go:177] * Starting "old-k8s-version-510006" primary control-plane node in "old-k8s-version-510006" cluster
	I0210 11:40:33.915294  166167 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 11:40:33.915329  166167 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 11:40:33.915336  166167 cache.go:56] Caching tarball of preloaded images
	I0210 11:40:33.915421  166167 preload.go:172] Found /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 11:40:33.915440  166167 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0210 11:40:33.915521  166167 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/config.json ...
	I0210 11:40:33.915538  166167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/config.json: {Name:mk754076024f66b063392bd8e7b86a0c5202ea5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:40:33.915665  166167 start.go:360] acquireMachinesLock for old-k8s-version-510006: {Name:mke6c3a615c5915495f0682c0833d8830c2c1004 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:40:54.743764  166167 start.go:364] duration metric: took 20.828064323s to acquireMachinesLock for "old-k8s-version-510006"
	I0210 11:40:54.743850  166167 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-510006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-versi
on-510006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 11:40:54.743993  166167 start.go:125] createHost starting for "" (driver="kvm2")
	I0210 11:40:54.745767  166167 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 11:40:54.745976  166167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:40:54.746018  166167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:40:54.762803  166167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0210 11:40:54.763211  166167 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:40:54.763849  166167 main.go:141] libmachine: Using API Version  1
	I0210 11:40:54.763872  166167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:40:54.764204  166167 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:40:54.764423  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetMachineName
	I0210 11:40:54.764570  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:40:54.764779  166167 start.go:159] libmachine.API.Create for "old-k8s-version-510006" (driver="kvm2")
	I0210 11:40:54.764820  166167 client.go:168] LocalClient.Create starting
	I0210 11:40:54.764860  166167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem
	I0210 11:40:54.764907  166167 main.go:141] libmachine: Decoding PEM data...
	I0210 11:40:54.764928  166167 main.go:141] libmachine: Parsing certificate...
	I0210 11:40:54.765003  166167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem
	I0210 11:40:54.765032  166167 main.go:141] libmachine: Decoding PEM data...
	I0210 11:40:54.765046  166167 main.go:141] libmachine: Parsing certificate...
	I0210 11:40:54.765059  166167 main.go:141] libmachine: Running pre-create checks...
	I0210 11:40:54.765069  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .PreCreateCheck
	I0210 11:40:54.765488  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetConfigRaw
	I0210 11:40:54.765934  166167 main.go:141] libmachine: Creating machine...
	I0210 11:40:54.765954  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .Create
	I0210 11:40:54.766085  166167 main.go:141] libmachine: (old-k8s-version-510006) creating KVM machine...
	I0210 11:40:54.766114  166167 main.go:141] libmachine: (old-k8s-version-510006) creating network...
	I0210 11:40:54.767087  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found existing default KVM network
	I0210 11:40:54.768530  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:54.768372  166356 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a0:37:f0} reservation:<nil>}
	I0210 11:40:54.769367  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:54.769280  166356 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:da:f8:fc} reservation:<nil>}
	I0210 11:40:54.770503  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:54.770401  166356 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026d620}
	I0210 11:40:54.770528  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | created network xml: 
	I0210 11:40:54.770540  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | <network>
	I0210 11:40:54.770561  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   <name>mk-old-k8s-version-510006</name>
	I0210 11:40:54.770571  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   <dns enable='no'/>
	I0210 11:40:54.770577  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   
	I0210 11:40:54.770599  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0210 11:40:54.770635  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |     <dhcp>
	I0210 11:40:54.770648  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0210 11:40:54.770655  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |     </dhcp>
	I0210 11:40:54.770661  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   </ip>
	I0210 11:40:54.770665  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG |   
	I0210 11:40:54.770680  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | </network>
	I0210 11:40:54.770687  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | 
	I0210 11:40:54.775561  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | trying to create private KVM network mk-old-k8s-version-510006 192.168.61.0/24...
	I0210 11:40:54.851626  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | private KVM network mk-old-k8s-version-510006 192.168.61.0/24 created
	I0210 11:40:54.851660  166167 main.go:141] libmachine: (old-k8s-version-510006) setting up store path in /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006 ...
	I0210 11:40:54.851674  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:54.851602  166356 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:40:54.851691  166167 main.go:141] libmachine: (old-k8s-version-510006) building disk image from file:///home/jenkins/minikube-integration/20385-109271/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 11:40:54.851815  166167 main.go:141] libmachine: (old-k8s-version-510006) Downloading /home/jenkins/minikube-integration/20385-109271/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20385-109271/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 11:40:55.111266  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:55.111102  166356 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa...
	I0210 11:40:55.176624  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:55.176475  166356 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/old-k8s-version-510006.rawdisk...
	I0210 11:40:55.176662  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | Writing magic tar header
	I0210 11:40:55.176680  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | Writing SSH key tar header
	I0210 11:40:55.176693  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:55.176628  166356 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006 ...
	I0210 11:40:55.176784  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006
	I0210 11:40:55.176841  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube/machines
	I0210 11:40:55.176867  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006 (perms=drwx------)
	I0210 11:40:55.176894  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:40:55.176924  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube/machines (perms=drwxr-xr-x)
	I0210 11:40:55.176948  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20385-109271
	I0210 11:40:55.176964  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0210 11:40:55.176977  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home/jenkins
	I0210 11:40:55.176988  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins/minikube-integration/20385-109271/.minikube (perms=drwxr-xr-x)
	I0210 11:40:55.177003  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | checking permissions on dir: /home
	I0210 11:40:55.177013  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins/minikube-integration/20385-109271 (perms=drwxrwxr-x)
	I0210 11:40:55.177031  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0210 11:40:55.177044  166167 main.go:141] libmachine: (old-k8s-version-510006) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0210 11:40:55.177058  166167 main.go:141] libmachine: (old-k8s-version-510006) creating domain...
	I0210 11:40:55.177069  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | skipping /home - not owner
	I0210 11:40:55.178351  166167 main.go:141] libmachine: (old-k8s-version-510006) define libvirt domain using xml: 
	I0210 11:40:55.178373  166167 main.go:141] libmachine: (old-k8s-version-510006) <domain type='kvm'>
	I0210 11:40:55.178384  166167 main.go:141] libmachine: (old-k8s-version-510006)   <name>old-k8s-version-510006</name>
	I0210 11:40:55.178392  166167 main.go:141] libmachine: (old-k8s-version-510006)   <memory unit='MiB'>2200</memory>
	I0210 11:40:55.178400  166167 main.go:141] libmachine: (old-k8s-version-510006)   <vcpu>2</vcpu>
	I0210 11:40:55.178426  166167 main.go:141] libmachine: (old-k8s-version-510006)   <features>
	I0210 11:40:55.178441  166167 main.go:141] libmachine: (old-k8s-version-510006)     <acpi/>
	I0210 11:40:55.178454  166167 main.go:141] libmachine: (old-k8s-version-510006)     <apic/>
	I0210 11:40:55.178465  166167 main.go:141] libmachine: (old-k8s-version-510006)     <pae/>
	I0210 11:40:55.178472  166167 main.go:141] libmachine: (old-k8s-version-510006)     
	I0210 11:40:55.178483  166167 main.go:141] libmachine: (old-k8s-version-510006)   </features>
	I0210 11:40:55.178491  166167 main.go:141] libmachine: (old-k8s-version-510006)   <cpu mode='host-passthrough'>
	I0210 11:40:55.178502  166167 main.go:141] libmachine: (old-k8s-version-510006)   
	I0210 11:40:55.178515  166167 main.go:141] libmachine: (old-k8s-version-510006)   </cpu>
	I0210 11:40:55.178523  166167 main.go:141] libmachine: (old-k8s-version-510006)   <os>
	I0210 11:40:55.178541  166167 main.go:141] libmachine: (old-k8s-version-510006)     <type>hvm</type>
	I0210 11:40:55.178567  166167 main.go:141] libmachine: (old-k8s-version-510006)     <boot dev='cdrom'/>
	I0210 11:40:55.178589  166167 main.go:141] libmachine: (old-k8s-version-510006)     <boot dev='hd'/>
	I0210 11:40:55.178601  166167 main.go:141] libmachine: (old-k8s-version-510006)     <bootmenu enable='no'/>
	I0210 11:40:55.178610  166167 main.go:141] libmachine: (old-k8s-version-510006)   </os>
	I0210 11:40:55.178617  166167 main.go:141] libmachine: (old-k8s-version-510006)   <devices>
	I0210 11:40:55.178630  166167 main.go:141] libmachine: (old-k8s-version-510006)     <disk type='file' device='cdrom'>
	I0210 11:40:55.178644  166167 main.go:141] libmachine: (old-k8s-version-510006)       <source file='/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/boot2docker.iso'/>
	I0210 11:40:55.178664  166167 main.go:141] libmachine: (old-k8s-version-510006)       <target dev='hdc' bus='scsi'/>
	I0210 11:40:55.178680  166167 main.go:141] libmachine: (old-k8s-version-510006)       <readonly/>
	I0210 11:40:55.178714  166167 main.go:141] libmachine: (old-k8s-version-510006)     </disk>
	I0210 11:40:55.178734  166167 main.go:141] libmachine: (old-k8s-version-510006)     <disk type='file' device='disk'>
	I0210 11:40:55.178761  166167 main.go:141] libmachine: (old-k8s-version-510006)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0210 11:40:55.178778  166167 main.go:141] libmachine: (old-k8s-version-510006)       <source file='/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/old-k8s-version-510006.rawdisk'/>
	I0210 11:40:55.178792  166167 main.go:141] libmachine: (old-k8s-version-510006)       <target dev='hda' bus='virtio'/>
	I0210 11:40:55.178803  166167 main.go:141] libmachine: (old-k8s-version-510006)     </disk>
	I0210 11:40:55.178816  166167 main.go:141] libmachine: (old-k8s-version-510006)     <interface type='network'>
	I0210 11:40:55.178829  166167 main.go:141] libmachine: (old-k8s-version-510006)       <source network='mk-old-k8s-version-510006'/>
	I0210 11:40:55.178842  166167 main.go:141] libmachine: (old-k8s-version-510006)       <model type='virtio'/>
	I0210 11:40:55.178852  166167 main.go:141] libmachine: (old-k8s-version-510006)     </interface>
	I0210 11:40:55.178862  166167 main.go:141] libmachine: (old-k8s-version-510006)     <interface type='network'>
	I0210 11:40:55.178874  166167 main.go:141] libmachine: (old-k8s-version-510006)       <source network='default'/>
	I0210 11:40:55.178884  166167 main.go:141] libmachine: (old-k8s-version-510006)       <model type='virtio'/>
	I0210 11:40:55.178894  166167 main.go:141] libmachine: (old-k8s-version-510006)     </interface>
	I0210 11:40:55.178904  166167 main.go:141] libmachine: (old-k8s-version-510006)     <serial type='pty'>
	I0210 11:40:55.178914  166167 main.go:141] libmachine: (old-k8s-version-510006)       <target port='0'/>
	I0210 11:40:55.178922  166167 main.go:141] libmachine: (old-k8s-version-510006)     </serial>
	I0210 11:40:55.178933  166167 main.go:141] libmachine: (old-k8s-version-510006)     <console type='pty'>
	I0210 11:40:55.178945  166167 main.go:141] libmachine: (old-k8s-version-510006)       <target type='serial' port='0'/>
	I0210 11:40:55.178956  166167 main.go:141] libmachine: (old-k8s-version-510006)     </console>
	I0210 11:40:55.178969  166167 main.go:141] libmachine: (old-k8s-version-510006)     <rng model='virtio'>
	I0210 11:40:55.178981  166167 main.go:141] libmachine: (old-k8s-version-510006)       <backend model='random'>/dev/random</backend>
	I0210 11:40:55.178990  166167 main.go:141] libmachine: (old-k8s-version-510006)     </rng>
	I0210 11:40:55.179000  166167 main.go:141] libmachine: (old-k8s-version-510006)     
	I0210 11:40:55.179008  166167 main.go:141] libmachine: (old-k8s-version-510006)     
	I0210 11:40:55.179018  166167 main.go:141] libmachine: (old-k8s-version-510006)   </devices>
	I0210 11:40:55.179026  166167 main.go:141] libmachine: (old-k8s-version-510006) </domain>
	I0210 11:40:55.179035  166167 main.go:141] libmachine: (old-k8s-version-510006) 
	I0210 11:40:55.183970  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:f1:bf:b8 in network default
	I0210 11:40:55.184659  166167 main.go:141] libmachine: (old-k8s-version-510006) starting domain...
	I0210 11:40:55.184681  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:55.184690  166167 main.go:141] libmachine: (old-k8s-version-510006) ensuring networks are active...
	I0210 11:40:55.185492  166167 main.go:141] libmachine: (old-k8s-version-510006) Ensuring network default is active
	I0210 11:40:55.185867  166167 main.go:141] libmachine: (old-k8s-version-510006) Ensuring network mk-old-k8s-version-510006 is active
	I0210 11:40:55.186453  166167 main.go:141] libmachine: (old-k8s-version-510006) getting domain XML...
	I0210 11:40:55.187329  166167 main.go:141] libmachine: (old-k8s-version-510006) creating domain...
	I0210 11:40:56.766544  166167 main.go:141] libmachine: (old-k8s-version-510006) waiting for IP...
	I0210 11:40:56.767626  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:56.768142  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:56.768214  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:56.768132  166356 retry.go:31] will retry after 276.457487ms: waiting for domain to come up
	I0210 11:40:57.046927  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:57.047743  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:57.047767  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:57.047653  166356 retry.go:31] will retry after 313.855375ms: waiting for domain to come up
	I0210 11:40:57.363765  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:57.364406  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:57.364432  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:57.364299  166356 retry.go:31] will retry after 332.646369ms: waiting for domain to come up
	I0210 11:40:57.699062  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:57.699734  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:57.699765  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:57.699708  166356 retry.go:31] will retry after 553.723279ms: waiting for domain to come up
	I0210 11:40:58.255647  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:58.256293  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:58.256321  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:58.256241  166356 retry.go:31] will retry after 601.932201ms: waiting for domain to come up
	I0210 11:40:58.861039  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:58.862190  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:58.862220  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:58.862157  166356 retry.go:31] will retry after 790.215455ms: waiting for domain to come up
	I0210 11:40:59.653446  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:40:59.653945  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:40:59.654017  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:40:59.653948  166356 retry.go:31] will retry after 1.174587269s: waiting for domain to come up
	I0210 11:41:00.830402  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:00.830964  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:41:00.831021  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:41:00.830943  166356 retry.go:31] will retry after 1.387688538s: waiting for domain to come up
	I0210 11:41:02.220592  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:02.221115  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:41:02.221139  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:41:02.221092  166356 retry.go:31] will retry after 1.360750997s: waiting for domain to come up
	I0210 11:41:03.583785  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:03.584298  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:41:03.584327  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:41:03.584257  166356 retry.go:31] will retry after 1.611350912s: waiting for domain to come up
	I0210 11:41:05.197083  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:05.197656  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:41:05.197679  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:41:05.197618  166356 retry.go:31] will retry after 2.811278567s: waiting for domain to come up
	I0210 11:41:08.012676  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:08.013205  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:41:08.013234  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:41:08.013167  166356 retry.go:31] will retry after 2.475673006s: waiting for domain to come up
	I0210 11:41:10.490144  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:10.490698  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:41:10.490729  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:41:10.490658  166356 retry.go:31] will retry after 4.311882097s: waiting for domain to come up
	I0210 11:41:14.804339  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:14.804899  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:41:14.804929  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:41:14.804877  166356 retry.go:31] will retry after 5.641913651s: waiting for domain to come up
	I0210 11:41:20.448245  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:20.448772  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has current primary IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:20.448806  166167 main.go:141] libmachine: (old-k8s-version-510006) found domain IP: 192.168.61.244
	I0210 11:41:20.448818  166167 main.go:141] libmachine: (old-k8s-version-510006) reserving static IP address...
	I0210 11:41:20.449197  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-510006", mac: "52:54:00:57:cc:39", ip: "192.168.61.244"} in network mk-old-k8s-version-510006
	I0210 11:41:20.529812  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | Getting to WaitForSSH function...
	I0210 11:41:20.529849  166167 main.go:141] libmachine: (old-k8s-version-510006) reserved static IP address 192.168.61.244 for domain old-k8s-version-510006
	I0210 11:41:20.529868  166167 main.go:141] libmachine: (old-k8s-version-510006) waiting for SSH...
	I0210 11:41:20.533599  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:20.534129  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:20.534158  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:20.534247  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | Using SSH client type: external
	I0210 11:41:20.534280  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa (-rw-------)
	I0210 11:41:20.534318  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 11:41:20.534328  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | About to run SSH command:
	I0210 11:41:20.534341  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | exit 0
	I0210 11:41:20.663940  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | SSH cmd err, output: <nil>: 
	I0210 11:41:20.664224  166167 main.go:141] libmachine: (old-k8s-version-510006) KVM machine creation complete
	I0210 11:41:20.664621  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetConfigRaw
	I0210 11:41:20.665329  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:41:20.665539  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:41:20.665695  166167 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 11:41:20.665716  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetState
	I0210 11:41:20.667317  166167 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 11:41:20.667335  166167 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 11:41:20.667342  166167 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 11:41:20.667358  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:41:20.670404  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:20.670882  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:20.670909  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:20.671044  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:41:20.671289  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:20.671458  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:20.671587  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:41:20.671738  166167 main.go:141] libmachine: Using SSH client type: native
	I0210 11:41:20.671988  166167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0210 11:41:20.672002  166167 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 11:41:20.782299  166167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:41:20.782328  166167 main.go:141] libmachine: Detecting the provisioner...
	I0210 11:41:20.782340  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:41:20.785817  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:20.786202  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:20.786232  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:20.786424  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:41:20.786629  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:20.786819  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:20.787013  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:41:20.787241  166167 main.go:141] libmachine: Using SSH client type: native
	I0210 11:41:20.787438  166167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0210 11:41:20.787449  166167 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 11:41:20.896966  166167 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 11:41:20.897036  166167 main.go:141] libmachine: found compatible host: buildroot
	I0210 11:41:20.897044  166167 main.go:141] libmachine: Provisioning with buildroot...
	I0210 11:41:20.897054  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetMachineName
	I0210 11:41:20.897320  166167 buildroot.go:166] provisioning hostname "old-k8s-version-510006"
	I0210 11:41:20.897346  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetMachineName
	I0210 11:41:20.897564  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:41:20.900602  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:20.901020  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:20.901056  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:20.901210  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:41:20.901403  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:20.901592  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:20.901739  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:41:20.901951  166167 main.go:141] libmachine: Using SSH client type: native
	I0210 11:41:20.902137  166167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0210 11:41:20.902157  166167 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-510006 && echo "old-k8s-version-510006" | sudo tee /etc/hostname
	I0210 11:41:21.030779  166167 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-510006
	
	I0210 11:41:21.030810  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:41:21.033575  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.034021  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:21.034055  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.034262  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:41:21.034480  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:21.034667  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:21.034819  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:41:21.035010  166167 main.go:141] libmachine: Using SSH client type: native
	I0210 11:41:21.035242  166167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0210 11:41:21.035270  166167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-510006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-510006/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-510006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:41:21.148566  166167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:41:21.148625  166167 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20385-109271/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-109271/.minikube}
	I0210 11:41:21.148697  166167 buildroot.go:174] setting up certificates
	I0210 11:41:21.148711  166167 provision.go:84] configureAuth start
	I0210 11:41:21.148731  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetMachineName
	I0210 11:41:21.149018  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetIP
	I0210 11:41:21.152154  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.152505  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:21.152536  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.152704  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:41:21.155661  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.156027  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:21.156054  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.156289  166167 provision.go:143] copyHostCerts
	I0210 11:41:21.156370  166167 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem, removing ...
	I0210 11:41:21.156390  166167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem
	I0210 11:41:21.156480  166167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem (1078 bytes)
	I0210 11:41:21.156642  166167 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem, removing ...
	I0210 11:41:21.156657  166167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem
	I0210 11:41:21.156708  166167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem (1123 bytes)
	I0210 11:41:21.156810  166167 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem, removing ...
	I0210 11:41:21.156821  166167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem
	I0210 11:41:21.156863  166167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem (1679 bytes)
	I0210 11:41:21.156981  166167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-510006 san=[127.0.0.1 192.168.61.244 localhost minikube old-k8s-version-510006]
	I0210 11:41:21.372334  166167 provision.go:177] copyRemoteCerts
	I0210 11:41:21.372406  166167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:41:21.372433  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:41:21.375595  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.376129  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:21.376162  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.376380  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:41:21.376590  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:21.376820  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:41:21.376992  166167 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa Username:docker}
	I0210 11:41:21.460886  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0210 11:41:21.493843  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 11:41:21.526703  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:41:21.553480  166167 provision.go:87] duration metric: took 404.751859ms to configureAuth
	I0210 11:41:21.553511  166167 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:41:21.553713  166167 config.go:182] Loaded profile config "old-k8s-version-510006": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 11:41:21.553820  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:41:21.556584  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.556910  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:21.556947  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.557237  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:41:21.557419  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:21.557564  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:21.557691  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:41:21.557860  166167 main.go:141] libmachine: Using SSH client type: native
	I0210 11:41:21.558071  166167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0210 11:41:21.558096  166167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 11:41:21.802607  166167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 11:41:21.802643  166167 main.go:141] libmachine: Checking connection to Docker...
	I0210 11:41:21.802653  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetURL
	I0210 11:41:21.803994  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | using libvirt version 6000000
	I0210 11:41:21.806514  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.806930  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:21.806963  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.807174  166167 main.go:141] libmachine: Docker is up and running!
	I0210 11:41:21.807209  166167 main.go:141] libmachine: Reticulating splines...
	I0210 11:41:21.807218  166167 client.go:171] duration metric: took 27.042385041s to LocalClient.Create
	I0210 11:41:21.807245  166167 start.go:167] duration metric: took 27.042468472s to libmachine.API.Create "old-k8s-version-510006"
	I0210 11:41:21.807259  166167 start.go:293] postStartSetup for "old-k8s-version-510006" (driver="kvm2")
	I0210 11:41:21.807270  166167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:41:21.807290  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:41:21.807552  166167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:41:21.807578  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:41:21.809944  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.810257  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:21.810282  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.810380  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:41:21.810564  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:21.810730  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:41:21.810885  166167 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa Username:docker}
	I0210 11:41:21.893646  166167 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:41:21.897965  166167 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:41:21.897996  166167 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/addons for local assets ...
	I0210 11:41:21.898124  166167 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/files for local assets ...
	I0210 11:41:21.898225  166167 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem -> 1164702.pem in /etc/ssl/certs
	I0210 11:41:21.898339  166167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:41:21.907439  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:41:21.933187  166167 start.go:296] duration metric: took 125.916099ms for postStartSetup
	I0210 11:41:21.933233  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetConfigRaw
	I0210 11:41:21.933793  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetIP
	I0210 11:41:21.936328  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.936643  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:21.936675  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.936895  166167 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/config.json ...
	I0210 11:41:21.937088  166167 start.go:128] duration metric: took 27.193082062s to createHost
	I0210 11:41:21.937177  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:41:21.939771  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.940279  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:21.940310  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:21.940578  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:41:21.940755  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:21.940942  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:21.941096  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:41:21.941274  166167 main.go:141] libmachine: Using SSH client type: native
	I0210 11:41:21.941511  166167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0210 11:41:21.941529  166167 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:41:22.051888  166167 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739187682.007358004
	
	I0210 11:41:22.051917  166167 fix.go:216] guest clock: 1739187682.007358004
	I0210 11:41:22.051928  166167 fix.go:229] Guest: 2025-02-10 11:41:22.007358004 +0000 UTC Remote: 2025-02-10 11:41:21.937098841 +0000 UTC m=+48.142791451 (delta=70.259163ms)
	I0210 11:41:22.051956  166167 fix.go:200] guest clock delta is within tolerance: 70.259163ms
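	For reference, the delta logged here is simply the guest clock minus the local reference captured just before the SSH round trip: 11:41:22.007358004 - 11:41:21.937098841 ≈ 70.26ms, which fix.go then compares against its skew tolerance.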
	I0210 11:41:22.051963  166167 start.go:83] releasing machines lock for "old-k8s-version-510006", held for 27.308148524s
	I0210 11:41:22.051991  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:41:22.052262  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetIP
	I0210 11:41:22.055083  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:22.055414  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:22.055437  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:22.055625  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:41:22.056135  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:41:22.056310  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:41:22.056407  166167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:41:22.056455  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:41:22.056571  166167 ssh_runner.go:195] Run: cat /version.json
	I0210 11:41:22.056596  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:41:22.059385  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:22.059411  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:22.059774  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:22.059810  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:22.059839  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:22.059880  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:22.059968  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:41:22.060049  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:41:22.060117  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:22.060212  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:41:22.060258  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:41:22.060366  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:41:22.060443  166167 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa Username:docker}
	I0210 11:41:22.060491  166167 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa Username:docker}
	I0210 11:41:22.140396  166167 ssh_runner.go:195] Run: systemctl --version
	I0210 11:41:22.176741  166167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 11:41:22.337375  166167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 11:41:22.343857  166167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:41:22.343927  166167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:41:22.358812  166167 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
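	Note on the step above: the find/-exec command renames any bridge or podman CNI configs so the container runtime ignores them; in this run a single file matched, ending up as:
	
	  /etc/cni/net.d/87-podman-bridge.conflist -> /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled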
	I0210 11:41:22.358844  166167 start.go:495] detecting cgroup driver to use...
	I0210 11:41:22.358923  166167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:41:22.377476  166167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:41:22.392699  166167 docker.go:217] disabling cri-docker service (if available) ...
	I0210 11:41:22.392771  166167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 11:41:22.406407  166167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 11:41:22.419989  166167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 11:41:22.564581  166167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 11:41:22.767119  166167 docker.go:233] disabling docker service ...
	I0210 11:41:22.767220  166167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 11:41:22.784273  166167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 11:41:22.798257  166167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 11:41:22.928649  166167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 11:41:23.068649  166167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 11:41:23.087297  166167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:41:23.113639  166167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0210 11:41:23.113743  166167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:41:23.129627  166167 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 11:41:23.129680  166167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:41:23.145172  166167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:41:23.161436  166167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
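	After these three sed edits, the relevant lines in /etc/crio/crio.conf.d/02-crio.conf should look roughly like the following (a sketch assembled from the values logged above, not a dump of the actual file):
	
	  pause_image = "registry.k8s.io/pause:3.2"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"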
	I0210 11:41:23.176166  166167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:41:23.194247  166167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:41:23.207242  166167 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:41:23.207312  166167 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:41:23.224288  166167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:41:23.236369  166167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:41:23.397432  166167 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 11:41:23.509501  166167 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 11:41:23.509578  166167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 11:41:23.516078  166167 start.go:563] Will wait 60s for crictl version
	I0210 11:41:23.516149  166167 ssh_runner.go:195] Run: which crictl
	I0210 11:41:23.521035  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:41:23.586475  166167 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 11:41:23.586560  166167 ssh_runner.go:195] Run: crio --version
	I0210 11:41:23.618745  166167 ssh_runner.go:195] Run: crio --version
	I0210 11:41:23.650936  166167 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0210 11:41:23.652243  166167 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetIP
	I0210 11:41:23.654809  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:23.655257  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:41:10 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:41:23.655287  166167 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:41:23.655550  166167 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0210 11:41:23.660372  166167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
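	The one-liner above is minikube's usual /etc/hosts idiom: grep -v drops any stale host.minikube.internal entry, echo appends the current gateway mapping, and the temp file is copied back over /etc/hosts, leaving the guest with a line equivalent to:
	
	  192.168.61.1	host.minikube.internal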
	I0210 11:41:23.674906  166167 kubeadm.go:883] updating cluster {Name:old-k8s-version-510006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510006 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:41:23.675073  166167 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 11:41:23.675139  166167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:41:23.715504  166167 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 11:41:23.715604  166167 ssh_runner.go:195] Run: which lz4
	I0210 11:41:23.720728  166167 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 11:41:23.726826  166167 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 11:41:23.726861  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0210 11:41:25.428563  166167 crio.go:462] duration metric: took 1.707878482s to copy over tarball
	I0210 11:41:25.428629  166167 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 11:41:28.580685  166167 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.15202319s)
	I0210 11:41:28.580720  166167 crio.go:469] duration metric: took 3.152126881s to extract the tarball
	I0210 11:41:28.580735  166167 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 11:41:28.641361  166167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:41:28.689084  166167 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 11:41:28.689109  166167 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 11:41:28.689174  166167 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:41:28.689218  166167 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:41:28.689285  166167 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:41:28.689464  166167 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0210 11:41:28.689483  166167 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:41:28.689550  166167 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0210 11:41:28.689553  166167 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0210 11:41:28.689469  166167 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:41:28.690931  166167 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:41:28.690941  166167 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:41:28.691000  166167 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:41:28.691246  166167 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0210 11:41:28.691345  166167 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0210 11:41:28.691379  166167 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:41:28.691353  166167 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0210 11:41:28.691515  166167 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:41:28.900965  166167 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0210 11:41:28.916463  166167 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0210 11:41:28.918643  166167 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:41:28.919662  166167 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0210 11:41:28.921643  166167 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:41:28.929912  166167 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:41:28.981699  166167 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:41:29.009280  166167 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0210 11:41:29.009348  166167 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0210 11:41:29.009413  166167 ssh_runner.go:195] Run: which crictl
	I0210 11:41:29.057052  166167 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0210 11:41:29.057102  166167 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0210 11:41:29.057143  166167 ssh_runner.go:195] Run: which crictl
	I0210 11:41:29.117117  166167 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0210 11:41:29.117150  166167 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0210 11:41:29.117162  166167 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:41:29.117182  166167 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:41:29.117207  166167 ssh_runner.go:195] Run: which crictl
	I0210 11:41:29.117240  166167 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0210 11:41:29.117254  166167 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0210 11:41:29.117212  166167 ssh_runner.go:195] Run: which crictl
	I0210 11:41:29.117272  166167 ssh_runner.go:195] Run: which crictl
	I0210 11:41:29.117330  166167 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0210 11:41:29.117353  166167 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:41:29.117361  166167 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0210 11:41:29.117379  166167 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:41:29.117383  166167 ssh_runner.go:195] Run: which crictl
	I0210 11:41:29.117410  166167 ssh_runner.go:195] Run: which crictl
	I0210 11:41:29.117458  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 11:41:29.117485  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 11:41:29.192454  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 11:41:29.192490  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:41:29.192508  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 11:41:29.192539  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 11:41:29.192550  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:41:29.192556  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:41:29.192594  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:41:29.356234  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 11:41:29.356261  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:41:29.356351  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 11:41:29.356376  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:41:29.356439  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 11:41:29.356481  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:41:29.356499  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:41:29.527216  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:41:29.527317  166167 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0210 11:41:29.527393  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:41:29.527466  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:41:29.527541  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 11:41:29.527601  166167 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0210 11:41:29.527657  166167 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:41:29.661356  166167 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0210 11:41:29.661404  166167 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0210 11:41:29.661475  166167 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0210 11:41:29.661540  166167 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0210 11:41:29.661585  166167 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0210 11:41:29.841628  166167 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:41:29.996911  166167 cache_images.go:92] duration metric: took 1.307781905s to LoadCachedImages
	W0210 11:41:29.997022  166167 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0210 11:41:29.997041  166167 kubeadm.go:934] updating node { 192.168.61.244 8443 v1.20.0 crio true true} ...
	I0210 11:41:29.997174  166167 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-510006 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:41:29.997296  166167 ssh_runner.go:195] Run: crio config
	I0210 11:41:30.068175  166167 cni.go:84] Creating CNI manager for ""
	I0210 11:41:30.068201  166167 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:41:30.068216  166167 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 11:41:30.068241  166167 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.244 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-510006 NodeName:old-k8s-version-510006 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0210 11:41:30.068416  166167 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-510006"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 11:41:30.068488  166167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0210 11:41:30.081966  166167 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:41:30.082034  166167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:41:30.093997  166167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0210 11:41:30.114238  166167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:41:30.131615  166167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
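	The 2123-byte payload written here is the kubeadm config printed above; it is copied from kubeadm.yaml.new to /var/tmp/minikube/kubeadm.yaml further down before kubeadm runs. As an aside (not part of this test run), a config of this shape can be sanity-checked with kubeadm's dry-run mode, assuming a matching v1.20 kubeadm binary is available:
	
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run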
	I0210 11:41:30.152786  166167 ssh_runner.go:195] Run: grep 192.168.61.244	control-plane.minikube.internal$ /etc/hosts
	I0210 11:41:30.157198  166167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:41:30.170748  166167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:41:30.308720  166167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:41:30.331874  166167 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006 for IP: 192.168.61.244
	I0210 11:41:30.331898  166167 certs.go:194] generating shared ca certs ...
	I0210 11:41:30.331918  166167 certs.go:226] acquiring lock for ca certs: {Name:mk41def3593b0ff6effd099cf80de2e0c576c931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:41:30.332105  166167 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key
	I0210 11:41:30.332150  166167 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key
	I0210 11:41:30.332170  166167 certs.go:256] generating profile certs ...
	I0210 11:41:30.332245  166167 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/client.key
	I0210 11:41:30.332263  166167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/client.crt with IP's: []
	I0210 11:41:30.668089  166167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/client.crt ...
	I0210 11:41:30.668128  166167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/client.crt: {Name:mk8347aea9ae155ce801a2e71d499b500b341d73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:41:30.668316  166167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/client.key ...
	I0210 11:41:30.668336  166167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/client.key: {Name:mk1d5190f81a74442c1ea6589a6d48f9e0770235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:41:30.668453  166167 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.key.25437697
	I0210 11:41:30.668476  166167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.crt.25437697 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.244]
	I0210 11:41:30.717183  166167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.crt.25437697 ...
	I0210 11:41:30.717215  166167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.crt.25437697: {Name:mk989c3c29aa495f78e6ed948a34f98da35fb993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:41:30.717382  166167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.key.25437697 ...
	I0210 11:41:30.717401  166167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.key.25437697: {Name:mk435213c3dae38f2938dc1ecba07a2e3248b984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:41:30.717493  166167 certs.go:381] copying /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.crt.25437697 -> /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.crt
	I0210 11:41:30.717583  166167 certs.go:385] copying /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.key.25437697 -> /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.key
	I0210 11:41:30.717665  166167 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/proxy-client.key
	I0210 11:41:30.717686  166167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/proxy-client.crt with IP's: []
	I0210 11:41:30.783571  166167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/proxy-client.crt ...
	I0210 11:41:30.783605  166167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/proxy-client.crt: {Name:mk6a0fecfe025c211fba745a1d973eef00d23493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:41:30.783794  166167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/proxy-client.key ...
	I0210 11:41:30.783820  166167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/proxy-client.key: {Name:mk7c96a88f077e6b78193c9d152747aff367736b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:41:30.784039  166167 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem (1338 bytes)
	W0210 11:41:30.784082  166167 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470_empty.pem, impossibly tiny 0 bytes
	I0210 11:41:30.784096  166167 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 11:41:30.784128  166167 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem (1078 bytes)
	I0210 11:41:30.784162  166167 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem (1123 bytes)
	I0210 11:41:30.784197  166167 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem (1679 bytes)
	I0210 11:41:30.784252  166167 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:41:30.784918  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:41:30.814479  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0210 11:41:30.841908  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:41:30.871132  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 11:41:30.901058  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0210 11:41:30.932004  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:41:31.001491  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:41:31.033436  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:41:31.060567  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /usr/share/ca-certificates/1164702.pem (1708 bytes)
	I0210 11:41:31.087940  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:41:31.116420  166167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem --> /usr/share/ca-certificates/116470.pem (1338 bytes)
	I0210 11:41:31.146415  166167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 11:41:31.166908  166167 ssh_runner.go:195] Run: openssl version
	I0210 11:41:31.173401  166167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1164702.pem && ln -fs /usr/share/ca-certificates/1164702.pem /etc/ssl/certs/1164702.pem"
	I0210 11:41:31.186597  166167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1164702.pem
	I0210 11:41:31.191377  166167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:41 /usr/share/ca-certificates/1164702.pem
	I0210 11:41:31.191422  166167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1164702.pem
	I0210 11:41:31.198032  166167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1164702.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:41:31.215852  166167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:41:31.233082  166167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:41:31.240693  166167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:41:31.240744  166167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:41:31.247547  166167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:41:31.269063  166167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116470.pem && ln -fs /usr/share/ca-certificates/116470.pem /etc/ssl/certs/116470.pem"
	I0210 11:41:31.283441  166167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116470.pem
	I0210 11:41:31.289206  166167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:41 /usr/share/ca-certificates/116470.pem
	I0210 11:41:31.289266  166167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116470.pem
	I0210 11:41:31.301445  166167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116470.pem /etc/ssl/certs/51391683.0"
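	The pattern in the last few commands is OpenSSL's hashed-directory layout: each certificate is hashed with openssl x509 -hash and symlinked as /etc/ssl/certs/<hash>.0, which is why minikubeCA.pem ends up behind b5213941.0 and the two test certs behind 3ec20f2e.0 and 51391683.0. For example, the CA's hash can be reproduced with:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # expected to print b5213941, matching the symlink created above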
	I0210 11:41:31.331158  166167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:41:31.337010  166167 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 11:41:31.337081  166167 kubeadm.go:392] StartCluster: {Name:old-k8s-version-510006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:41:31.337190  166167 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 11:41:31.337256  166167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:41:31.388680  166167 cri.go:89] found id: ""
	I0210 11:41:31.388759  166167 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 11:41:31.401955  166167 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:41:31.412189  166167 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:41:31.423829  166167 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:41:31.423852  166167 kubeadm.go:157] found existing configuration files:
	
	I0210 11:41:31.423897  166167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:41:31.436267  166167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:41:31.436322  166167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:41:31.446716  166167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:41:31.456378  166167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:41:31.456441  166167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:41:31.466204  166167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:41:31.476983  166167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:41:31.477044  166167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:41:31.487892  166167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:41:31.498825  166167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:41:31.498883  166167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:41:31.510924  166167 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:41:31.710585  166167 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:41:31.710712  166167 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:41:31.892514  166167 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:41:31.892677  166167 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:41:31.892800  166167 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:41:32.120984  166167 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:41:32.123010  166167 out.go:235]   - Generating certificates and keys ...
	I0210 11:41:32.123127  166167 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:41:32.123264  166167 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:41:32.529256  166167 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 11:41:32.689055  166167 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 11:41:32.862020  166167 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 11:41:32.961919  166167 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 11:41:33.207046  166167 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 11:41:33.207316  166167 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-510006] and IPs [192.168.61.244 127.0.0.1 ::1]
	I0210 11:41:33.395407  166167 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 11:41:33.395687  166167 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-510006] and IPs [192.168.61.244 127.0.0.1 ::1]
	I0210 11:41:33.638853  166167 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 11:41:34.004754  166167 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 11:41:34.403016  166167 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 11:41:34.403315  166167 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:41:34.695300  166167 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:41:35.521785  166167 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:41:35.654167  166167 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:41:35.729522  166167 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:41:35.748263  166167 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:41:35.748894  166167 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:41:35.748960  166167 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:41:35.945886  166167 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:41:35.947592  166167 out.go:235]   - Booting up control plane ...
	I0210 11:41:35.947722  166167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:41:35.959963  166167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:41:35.961536  166167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:41:35.962759  166167 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:41:35.968665  166167 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:42:15.932370  166167 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:42:15.933231  166167 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:42:15.933440  166167 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:42:20.932672  166167 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:42:20.932982  166167 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:42:30.932137  166167 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:42:30.932430  166167 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:42:50.932113  166167 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:42:50.932404  166167 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:43:30.932238  166167 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:43:30.932898  166167 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:43:30.932927  166167 kubeadm.go:310] 
	I0210 11:43:30.933062  166167 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:43:30.933164  166167 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:43:30.933174  166167 kubeadm.go:310] 
	I0210 11:43:30.933224  166167 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:43:30.933309  166167 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:43:30.933579  166167 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:43:30.933605  166167 kubeadm.go:310] 
	I0210 11:43:30.933910  166167 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:43:30.934019  166167 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:43:30.934108  166167 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:43:30.934126  166167 kubeadm.go:310] 
	I0210 11:43:30.934397  166167 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:43:30.934592  166167 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:43:30.934609  166167 kubeadm.go:310] 
	I0210 11:43:30.935009  166167 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:43:30.935272  166167 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:43:30.935560  166167 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:43:30.935868  166167 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:43:30.935910  166167 kubeadm.go:310] 
	I0210 11:43:30.936040  166167 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:43:30.936154  166167 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:43:30.936252  166167 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0210 11:43:30.936377  166167 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-510006] and IPs [192.168.61.244 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-510006] and IPs [192.168.61.244 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-510006] and IPs [192.168.61.244 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-510006] and IPs [192.168.61.244 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 11:43:30.936427  166167 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 11:43:31.523534  166167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:43:31.536969  166167 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:43:31.545900  166167 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:43:31.545928  166167 kubeadm.go:157] found existing configuration files:
	
	I0210 11:43:31.545985  166167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:43:31.554442  166167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:43:31.554505  166167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:43:31.562900  166167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:43:31.570891  166167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:43:31.570938  166167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:43:31.579238  166167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:43:31.587138  166167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:43:31.587201  166167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:43:31.595628  166167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:43:31.603606  166167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:43:31.603658  166167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:43:31.611841  166167 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:43:31.681510  166167 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:43:31.681624  166167 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:43:31.820815  166167 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:43:31.820994  166167 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:43:31.821146  166167 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:43:32.001479  166167 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:43:32.003845  166167 out.go:235]   - Generating certificates and keys ...
	I0210 11:43:32.003979  166167 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:43:32.004098  166167 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:43:32.004222  166167 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 11:43:32.004320  166167 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 11:43:32.004427  166167 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 11:43:32.004508  166167 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 11:43:32.004598  166167 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 11:43:32.004685  166167 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 11:43:32.004784  166167 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 11:43:32.004880  166167 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 11:43:32.004955  166167 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 11:43:32.005051  166167 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:43:32.065682  166167 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:43:32.186424  166167 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:43:32.384004  166167 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:43:32.497794  166167 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:43:32.513056  166167 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:43:32.513571  166167 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:43:32.513623  166167 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:43:32.663213  166167 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:43:32.665088  166167 out.go:235]   - Booting up control plane ...
	I0210 11:43:32.665197  166167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:43:32.672861  166167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:43:32.674219  166167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:43:32.678811  166167 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:43:32.689421  166167 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:44:12.691392  166167 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:44:12.691482  166167 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:44:12.691773  166167 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:44:17.692446  166167 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:44:17.692644  166167 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:44:27.693325  166167 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:44:27.693611  166167 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:44:47.694710  166167 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:44:47.694958  166167 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:45:27.693634  166167 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:45:27.693885  166167 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:45:27.693920  166167 kubeadm.go:310] 
	I0210 11:45:27.693973  166167 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:45:27.694031  166167 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:45:27.694038  166167 kubeadm.go:310] 
	I0210 11:45:27.694086  166167 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:45:27.694114  166167 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:45:27.694205  166167 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:45:27.694210  166167 kubeadm.go:310] 
	I0210 11:45:27.694339  166167 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:45:27.694384  166167 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:45:27.694421  166167 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:45:27.694428  166167 kubeadm.go:310] 
	I0210 11:45:27.694564  166167 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:45:27.694668  166167 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:45:27.694679  166167 kubeadm.go:310] 
	I0210 11:45:27.694823  166167 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:45:27.694939  166167 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:45:27.695037  166167 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:45:27.695139  166167 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:45:27.695146  166167 kubeadm.go:310] 
	I0210 11:45:27.696322  166167 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:45:27.696436  166167 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:45:27.696521  166167 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 11:45:27.696595  166167 kubeadm.go:394] duration metric: took 3m56.359524974s to StartCluster
	I0210 11:45:27.696649  166167 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:45:27.696723  166167 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:45:27.742235  166167 cri.go:89] found id: ""
	I0210 11:45:27.742266  166167 logs.go:282] 0 containers: []
	W0210 11:45:27.742278  166167 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:45:27.742286  166167 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:45:27.742361  166167 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:45:27.777880  166167 cri.go:89] found id: ""
	I0210 11:45:27.777907  166167 logs.go:282] 0 containers: []
	W0210 11:45:27.777917  166167 logs.go:284] No container was found matching "etcd"
	I0210 11:45:27.777924  166167 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:45:27.777983  166167 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:45:27.811664  166167 cri.go:89] found id: ""
	I0210 11:45:27.811699  166167 logs.go:282] 0 containers: []
	W0210 11:45:27.811710  166167 logs.go:284] No container was found matching "coredns"
	I0210 11:45:27.811719  166167 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:45:27.811793  166167 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:45:27.845755  166167 cri.go:89] found id: ""
	I0210 11:45:27.845785  166167 logs.go:282] 0 containers: []
	W0210 11:45:27.845796  166167 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:45:27.845805  166167 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:45:27.845871  166167 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:45:27.880368  166167 cri.go:89] found id: ""
	I0210 11:45:27.880398  166167 logs.go:282] 0 containers: []
	W0210 11:45:27.880408  166167 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:45:27.880416  166167 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:45:27.880480  166167 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:45:27.914577  166167 cri.go:89] found id: ""
	I0210 11:45:27.914610  166167 logs.go:282] 0 containers: []
	W0210 11:45:27.914619  166167 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:45:27.914626  166167 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:45:27.914688  166167 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:45:27.950823  166167 cri.go:89] found id: ""
	I0210 11:45:27.950850  166167 logs.go:282] 0 containers: []
	W0210 11:45:27.950860  166167 logs.go:284] No container was found matching "kindnet"
	I0210 11:45:27.950874  166167 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:45:27.950892  166167 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:45:28.080317  166167 logs.go:123] Gathering logs for container status ...
	I0210 11:45:28.080353  166167 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:45:28.129199  166167 logs.go:123] Gathering logs for kubelet ...
	I0210 11:45:28.129230  166167 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:45:28.204138  166167 logs.go:123] Gathering logs for dmesg ...
	I0210 11:45:28.204187  166167 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:45:28.220293  166167 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:45:28.220335  166167 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:45:28.410579  166167 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0210 11:45:28.410614  166167 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 11:45:28.410668  166167 out.go:270] * 
	* 
	W0210 11:45:28.410734  166167 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:45:28.410749  166167 out.go:270] * 
	* 
	W0210 11:45:28.411667  166167 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 11:45:28.415260  166167 out.go:201] 
	W0210 11:45:28.417191  166167 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:45:28.417257  166167 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 11:45:28.417284  166167 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 11:45:28.418762  166167 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-510006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 6 (375.139677ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 11:45:28.851163  172131 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-510006" does not appear in /home/jenkins/minikube-integration/20385-109271/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-510006" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (295.09s)
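The kubeadm output above already spells out the triage path for this K8S_KUBELET_NOT_RUNNING exit: check whether the kubelet is running, read its journal, and look for a crashed control-plane container under cri-o. A minimal sketch of those steps, run from a shell inside the VM (for example via `minikube ssh -p old-k8s-version-510006`); CONTAINERID stays a placeholder exactly as in the log, and nothing here is specific to this failure beyond the paths the log itself prints:

	# Inside the VM: is the kubelet alive, and what does its journal say?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# The health probe kubeadm kept retrying:
	curl -sSL http://localhost:10248/healthz
	# List control-plane containers under cri-o, then inspect the failing one (CONTAINERID is a placeholder):
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

Back on the host, the suggestion printed at the end of the stderr can be tried by re-running the same start command with the extra kubelet flag (flags taken from the failing invocation above; whether it helps depends on what the journal shows):

	out/minikube-linux-amd64 start -p old-k8s-version-510006 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd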

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-510006 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-510006 create -f testdata/busybox.yaml: exit status 1 (85.096958ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-510006" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-510006 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 6 (303.139392ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 11:45:29.245423  172167 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-510006" does not appear in /home/jenkins/minikube-integration/20385-109271/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-510006" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 6 (277.910877ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 11:45:29.530055  172197 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-510006" does not appear in /home/jenkins/minikube-integration/20385-109271/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-510006" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.67s)
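Both the kubectl failure and the status warning above point at a stale kubeconfig: the "old-k8s-version-510006" context is missing from /home/jenkins/minikube-integration/20385-109271/kubeconfig. A small sketch of the check-and-repair the warning itself recommends (paths and profile name are the ones from the log; the retry at the end still depends on the control plane actually being up):

	# Confirm the context really is missing from the kubeconfig the test uses.
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/20385-109271/kubeconfig
	# Re-point kubectl at the profile, as the status warning recommends.
	out/minikube-linux-amd64 update-context -p old-k8s-version-510006
	# Retry the deployment the test attempted.
	kubectl --context old-k8s-version-510006 create -f testdata/busybox.yaml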

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-510006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0210 11:45:38.560481  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:38.804457  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:45.540857  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:53.022819  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:57.282411  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:02.064267  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:02.070685  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:02.082037  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:02.103499  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:02.144993  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:02.226460  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:02.387923  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:02.709719  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:03.351692  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:04.633330  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:07.195338  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:12.316889  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:15.138821  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:15.145211  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:15.156574  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:15.177945  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:15.219893  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:15.301438  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:15.463222  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:15.784949  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:16.277397  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:16.426970  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:17.708581  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:20.269872  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:22.558448  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:25.391160  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:26.502638  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:35.633156  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:43.040035  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:46:56.114556  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:47:00.482472  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-510006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m34.335192555s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-510006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-510006 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-510006 describe deploy/metrics-server -n kube-system: exit status 1 (46.33787ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-510006" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-510006 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 6 (234.823652ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 11:47:04.146550  172654 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-510006" does not appear in /home/jenkins/minikube-integration/20385-109271/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-510006" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.62s)
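The MK_ADDON_ENABLE failure above is downstream of the same root cause: the addon callback applies the metrics-server manifests inside the VM against localhost:8443, which is refused because the apiserver never came up. A hedged sketch of how one might confirm that before retrying the addon, using only the binary, kubeconfig, and log paths shown in the stderr above:

	# Inside the VM: reuse the same kubectl and kubeconfig the callback used to probe the apiserver directly.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl get --raw /healthz
	# On the host: the detailed addon log minikube asks to have attached to the issue.
	cat /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log
	# Once the apiserver answers, the enable command from the test can be retried as-is:
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-510006 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain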

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (512.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-510006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0210 11:47:16.104980  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:47:19.204397  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:47:24.001756  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:47:37.076487  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:47:48.424200  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:47:54.943034  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:48:22.646454  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:48:32.415807  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:48:45.923124  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:48:58.998157  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:49:00.119440  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:49:06.277038  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:49:16.621102  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:49:35.344246  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:49:44.324173  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-510006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m31.130630892s)

                                                
                                                
-- stdout --
	* [old-k8s-version-510006] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-510006" primary control-plane node in "old-k8s-version-510006" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-510006" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 11:47:07.699297  172785 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:47:07.699455  172785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:47:07.699466  172785 out.go:358] Setting ErrFile to fd 2...
	I0210 11:47:07.699475  172785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:47:07.699743  172785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:47:07.700500  172785 out.go:352] Setting JSON to false
	I0210 11:47:07.701781  172785 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8970,"bootTime":1739179058,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 11:47:07.701894  172785 start.go:139] virtualization: kvm guest
	I0210 11:47:07.703854  172785 out.go:177] * [old-k8s-version-510006] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 11:47:07.705329  172785 notify.go:220] Checking for updates...
	I0210 11:47:07.705348  172785 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:47:07.706450  172785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:47:07.707651  172785 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:47:07.708853  172785 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:47:07.710073  172785 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 11:47:07.711272  172785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:47:07.712807  172785 config.go:182] Loaded profile config "old-k8s-version-510006": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 11:47:07.713262  172785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:47:07.713343  172785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:47:07.729441  172785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41711
	I0210 11:47:07.729921  172785 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:47:07.730481  172785 main.go:141] libmachine: Using API Version  1
	I0210 11:47:07.730503  172785 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:47:07.730880  172785 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:47:07.731071  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:47:07.732717  172785 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0210 11:47:07.734038  172785 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:47:07.734364  172785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:47:07.734410  172785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:47:07.749468  172785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39499
	I0210 11:47:07.749968  172785 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:47:07.750435  172785 main.go:141] libmachine: Using API Version  1
	I0210 11:47:07.750463  172785 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:47:07.750768  172785 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:47:07.750974  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:47:07.790937  172785 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 11:47:07.792099  172785 start.go:297] selected driver: kvm2
	I0210 11:47:07.792126  172785 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-510006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-5
10006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:47:07.792273  172785 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:47:07.793239  172785 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:47:07.793316  172785 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-109271/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 11:47:07.809444  172785 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 11:47:07.809951  172785 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:47:07.809998  172785 cni.go:84] Creating CNI manager for ""
	I0210 11:47:07.810040  172785 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:47:07.810087  172785 start.go:340] cluster config:
	{Name:old-k8s-version-510006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:47:07.810244  172785 iso.go:125] acquiring lock: {Name:mk479d49a84808a4b16be867aad83d1d3d802291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:47:07.812123  172785 out.go:177] * Starting "old-k8s-version-510006" primary control-plane node in "old-k8s-version-510006" cluster
	I0210 11:47:07.813193  172785 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 11:47:07.813233  172785 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 11:47:07.813246  172785 cache.go:56] Caching tarball of preloaded images
	I0210 11:47:07.813332  172785 preload.go:172] Found /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 11:47:07.813366  172785 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0210 11:47:07.813465  172785 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/config.json ...
	I0210 11:47:07.813674  172785 start.go:360] acquireMachinesLock for old-k8s-version-510006: {Name:mke6c3a615c5915495f0682c0833d8830c2c1004 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:47:07.813737  172785 start.go:364] duration metric: took 40.608µs to acquireMachinesLock for "old-k8s-version-510006"
	I0210 11:47:07.813758  172785 start.go:96] Skipping create...Using existing machine configuration
	I0210 11:47:07.813770  172785 fix.go:54] fixHost starting: 
	I0210 11:47:07.814036  172785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:47:07.814076  172785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:47:07.829741  172785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39463
	I0210 11:47:07.830146  172785 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:47:07.830657  172785 main.go:141] libmachine: Using API Version  1
	I0210 11:47:07.830679  172785 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:47:07.831040  172785 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:47:07.831306  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:47:07.831480  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetState
	I0210 11:47:07.833392  172785 fix.go:112] recreateIfNeeded on old-k8s-version-510006: state=Stopped err=<nil>
	I0210 11:47:07.833423  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	W0210 11:47:07.833580  172785 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 11:47:07.836747  172785 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-510006" ...
	I0210 11:47:07.837983  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .Start
	I0210 11:47:07.838230  172785 main.go:141] libmachine: (old-k8s-version-510006) starting domain...
	I0210 11:47:07.838257  172785 main.go:141] libmachine: (old-k8s-version-510006) ensuring networks are active...
	I0210 11:47:07.839149  172785 main.go:141] libmachine: (old-k8s-version-510006) Ensuring network default is active
	I0210 11:47:07.839432  172785 main.go:141] libmachine: (old-k8s-version-510006) Ensuring network mk-old-k8s-version-510006 is active
	I0210 11:47:07.839756  172785 main.go:141] libmachine: (old-k8s-version-510006) getting domain XML...
	I0210 11:47:07.840465  172785 main.go:141] libmachine: (old-k8s-version-510006) creating domain...
	I0210 11:47:09.113332  172785 main.go:141] libmachine: (old-k8s-version-510006) waiting for IP...
	I0210 11:47:09.114266  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:09.114722  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:09.114809  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:09.114702  172820 retry.go:31] will retry after 243.183767ms: waiting for domain to come up
	I0210 11:47:09.359511  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:09.360171  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:09.360201  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:09.360113  172820 retry.go:31] will retry after 286.490459ms: waiting for domain to come up
	I0210 11:47:09.648722  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:09.649301  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:09.649329  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:09.649258  172820 retry.go:31] will retry after 342.584828ms: waiting for domain to come up
	I0210 11:47:09.993802  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:09.994326  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:09.994364  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:09.994280  172820 retry.go:31] will retry after 477.531539ms: waiting for domain to come up
	I0210 11:47:10.473692  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:10.474204  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:10.474235  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:10.474175  172820 retry.go:31] will retry after 605.503431ms: waiting for domain to come up
	I0210 11:47:11.081490  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:11.081922  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:11.081949  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:11.081884  172820 retry.go:31] will retry after 780.339331ms: waiting for domain to come up
	I0210 11:47:11.863749  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:11.864365  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:11.864397  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:11.864326  172820 retry.go:31] will retry after 1.052876704s: waiting for domain to come up
	I0210 11:47:12.919274  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:12.919758  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:12.919791  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:12.919715  172820 retry.go:31] will retry after 1.073991555s: waiting for domain to come up
	I0210 11:47:13.995539  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:13.996097  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:13.996138  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:13.996075  172820 retry.go:31] will retry after 1.141204998s: waiting for domain to come up
	I0210 11:47:15.139282  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:15.139840  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:15.139864  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:15.139825  172820 retry.go:31] will retry after 1.677845305s: waiting for domain to come up
	I0210 11:47:16.819558  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:16.820175  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:16.820202  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:16.820133  172820 retry.go:31] will retry after 2.513404587s: waiting for domain to come up
	I0210 11:47:19.336044  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:19.336652  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:19.336702  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:19.336615  172820 retry.go:31] will retry after 2.997801271s: waiting for domain to come up
	I0210 11:47:22.335619  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:22.336160  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | unable to find current IP address of domain old-k8s-version-510006 in network mk-old-k8s-version-510006
	I0210 11:47:22.336218  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | I0210 11:47:22.336125  172820 retry.go:31] will retry after 4.178207302s: waiting for domain to come up
	I0210 11:47:26.516554  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.517080  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has current primary IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.517104  172785 main.go:141] libmachine: (old-k8s-version-510006) found domain IP: 192.168.61.244
	I0210 11:47:26.517113  172785 main.go:141] libmachine: (old-k8s-version-510006) reserving static IP address...
	I0210 11:47:26.517569  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "old-k8s-version-510006", mac: "52:54:00:57:cc:39", ip: "192.168.61.244"} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:26.517607  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | skip adding static IP to network mk-old-k8s-version-510006 - found existing host DHCP lease matching {name: "old-k8s-version-510006", mac: "52:54:00:57:cc:39", ip: "192.168.61.244"}
	I0210 11:47:26.517626  172785 main.go:141] libmachine: (old-k8s-version-510006) reserved static IP address 192.168.61.244 for domain old-k8s-version-510006
	I0210 11:47:26.517642  172785 main.go:141] libmachine: (old-k8s-version-510006) waiting for SSH...
	I0210 11:47:26.517658  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | Getting to WaitForSSH function...
	I0210 11:47:26.520427  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.520767  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:26.520799  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.520977  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | Using SSH client type: external
	I0210 11:47:26.521012  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa (-rw-------)
	I0210 11:47:26.521041  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 11:47:26.521055  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | About to run SSH command:
	I0210 11:47:26.521072  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | exit 0
	I0210 11:47:26.646852  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | SSH cmd err, output: <nil>: 
	I0210 11:47:26.647314  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetConfigRaw
	I0210 11:47:26.647956  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetIP
	I0210 11:47:26.650361  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.650656  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:26.650686  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.650884  172785 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/config.json ...
	I0210 11:47:26.651119  172785 machine.go:93] provisionDockerMachine start ...
	I0210 11:47:26.651145  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:47:26.651380  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:47:26.653368  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.653728  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:26.653756  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.653921  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:47:26.654074  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:26.654235  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:26.654371  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:47:26.654584  172785 main.go:141] libmachine: Using SSH client type: native
	I0210 11:47:26.654790  172785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0210 11:47:26.654803  172785 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:47:26.763690  172785 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 11:47:26.763771  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetMachineName
	I0210 11:47:26.764052  172785 buildroot.go:166] provisioning hostname "old-k8s-version-510006"
	I0210 11:47:26.764084  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetMachineName
	I0210 11:47:26.764270  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:47:26.767179  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.767600  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:26.767633  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.767811  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:47:26.767987  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:26.768145  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:26.768306  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:47:26.768476  172785 main.go:141] libmachine: Using SSH client type: native
	I0210 11:47:26.768637  172785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0210 11:47:26.768649  172785 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-510006 && echo "old-k8s-version-510006" | sudo tee /etc/hostname
	I0210 11:47:26.894253  172785 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-510006
	
	I0210 11:47:26.894283  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:47:26.897241  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.897655  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:26.897685  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:26.897867  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:47:26.898114  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:26.898314  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:26.898485  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:47:26.898656  172785 main.go:141] libmachine: Using SSH client type: native
	I0210 11:47:26.898868  172785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0210 11:47:26.898893  172785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-510006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-510006/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-510006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:47:27.015336  172785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:47:27.015370  172785 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20385-109271/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-109271/.minikube}
	I0210 11:47:27.015416  172785 buildroot.go:174] setting up certificates
	I0210 11:47:27.015428  172785 provision.go:84] configureAuth start
	I0210 11:47:27.015439  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetMachineName
	I0210 11:47:27.015807  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetIP
	I0210 11:47:27.018881  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:27.019308  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:27.019335  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:27.019513  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:47:27.022019  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:27.022392  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:27.022426  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:27.022634  172785 provision.go:143] copyHostCerts
	I0210 11:47:27.022717  172785 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem, removing ...
	I0210 11:47:27.022736  172785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem
	I0210 11:47:27.022820  172785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem (1123 bytes)
	I0210 11:47:27.022936  172785 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem, removing ...
	I0210 11:47:27.022949  172785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem
	I0210 11:47:27.022993  172785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem (1679 bytes)
	I0210 11:47:27.023134  172785 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem, removing ...
	I0210 11:47:27.023148  172785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem
	I0210 11:47:27.023179  172785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem (1078 bytes)
	I0210 11:47:27.023275  172785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-510006 san=[127.0.0.1 192.168.61.244 localhost minikube old-k8s-version-510006]
	I0210 11:47:27.538711  172785 provision.go:177] copyRemoteCerts
	I0210 11:47:27.538767  172785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:47:27.538791  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:47:27.541478  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:27.541764  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:27.541808  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:27.541952  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:47:27.542145  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:27.542316  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:47:27.542452  172785 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa Username:docker}
	I0210 11:47:27.626346  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:47:27.649047  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0210 11:47:27.670199  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 11:47:27.693239  172785 provision.go:87] duration metric: took 677.798064ms to configureAuth
	I0210 11:47:27.693276  172785 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:47:27.693520  172785 config.go:182] Loaded profile config "old-k8s-version-510006": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 11:47:27.693632  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:47:27.696724  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:27.697317  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:27.697353  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:27.697549  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:47:27.697770  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:27.697971  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:27.698110  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:47:27.698295  172785 main.go:141] libmachine: Using SSH client type: native
	I0210 11:47:27.698461  172785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0210 11:47:27.698475  172785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 11:47:27.921884  172785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 11:47:27.921922  172785 machine.go:96] duration metric: took 1.270787847s to provisionDockerMachine
	I0210 11:47:27.921937  172785 start.go:293] postStartSetup for "old-k8s-version-510006" (driver="kvm2")
	I0210 11:47:27.921951  172785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:47:27.921975  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:47:27.922395  172785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:47:27.922439  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:47:27.925400  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:27.925818  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:27.925849  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:27.926015  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:47:27.926211  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:27.926406  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:47:27.926568  172785 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa Username:docker}
	I0210 11:47:28.009845  172785 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:47:28.013950  172785 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:47:28.013978  172785 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/addons for local assets ...
	I0210 11:47:28.014060  172785 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/files for local assets ...
	I0210 11:47:28.014155  172785 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem -> 1164702.pem in /etc/ssl/certs
	I0210 11:47:28.014246  172785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:47:28.024935  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:47:28.050035  172785 start.go:296] duration metric: took 128.080711ms for postStartSetup
	I0210 11:47:28.050084  172785 fix.go:56] duration metric: took 20.236315827s for fixHost
	I0210 11:47:28.050110  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:47:28.052976  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:28.053443  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:28.053476  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:28.053692  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:47:28.053882  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:28.054084  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:28.054295  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:47:28.054493  172785 main.go:141] libmachine: Using SSH client type: native
	I0210 11:47:28.054692  172785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.244 22 <nil> <nil>}
	I0210 11:47:28.054703  172785 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:47:28.167696  172785 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739188048.143007376
	
	I0210 11:47:28.167727  172785 fix.go:216] guest clock: 1739188048.143007376
	I0210 11:47:28.167738  172785 fix.go:229] Guest: 2025-02-10 11:47:28.143007376 +0000 UTC Remote: 2025-02-10 11:47:28.05008919 +0000 UTC m=+20.391976324 (delta=92.918186ms)
	I0210 11:47:28.167762  172785 fix.go:200] guest clock delta is within tolerance: 92.918186ms
	I0210 11:47:28.167767  172785 start.go:83] releasing machines lock for "old-k8s-version-510006", held for 20.354018364s
	I0210 11:47:28.167794  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:47:28.168087  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetIP
	I0210 11:47:28.170932  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:28.171344  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:28.171382  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:28.171531  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:47:28.172031  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:47:28.172249  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .DriverName
	I0210 11:47:28.172377  172785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:47:28.172422  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:47:28.172539  172785 ssh_runner.go:195] Run: cat /version.json
	I0210 11:47:28.172568  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHHostname
	I0210 11:47:28.175446  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:28.175472  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:28.175799  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:28.175834  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:28.175860  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:28.175880  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:28.175961  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:47:28.176157  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:28.176232  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHPort
	I0210 11:47:28.176345  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:47:28.176434  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHKeyPath
	I0210 11:47:28.176520  172785 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa Username:docker}
	I0210 11:47:28.176609  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetSSHUsername
	I0210 11:47:28.176778  172785 sshutil.go:53] new ssh client: &{IP:192.168.61.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/old-k8s-version-510006/id_rsa Username:docker}
	I0210 11:47:28.287572  172785 ssh_runner.go:195] Run: systemctl --version
	I0210 11:47:28.293622  172785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 11:47:28.436577  172785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 11:47:28.442080  172785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:47:28.442150  172785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:47:28.457777  172785 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:47:28.457802  172785 start.go:495] detecting cgroup driver to use...
	I0210 11:47:28.457857  172785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:47:28.474417  172785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:47:28.488019  172785 docker.go:217] disabling cri-docker service (if available) ...
	I0210 11:47:28.488083  172785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 11:47:28.500879  172785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 11:47:28.513935  172785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 11:47:28.638514  172785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 11:47:28.768960  172785 docker.go:233] disabling docker service ...
	I0210 11:47:28.769052  172785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 11:47:28.783407  172785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 11:47:28.797874  172785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 11:47:28.926781  172785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 11:47:29.048172  172785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 11:47:29.061263  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:47:29.079378  172785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0210 11:47:29.079441  172785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:47:29.089801  172785 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 11:47:29.089875  172785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:47:29.100085  172785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:47:29.110463  172785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:47:29.121065  172785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:47:29.131543  172785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:47:29.141160  172785 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:47:29.141224  172785 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:47:29.156236  172785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:47:29.169391  172785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:47:29.306878  172785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 11:47:29.399072  172785 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 11:47:29.399148  172785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 11:47:29.403535  172785 start.go:563] Will wait 60s for crictl version
	I0210 11:47:29.403588  172785 ssh_runner.go:195] Run: which crictl
	I0210 11:47:29.406940  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:47:29.446287  172785 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 11:47:29.446361  172785 ssh_runner.go:195] Run: crio --version
	I0210 11:47:29.476791  172785 ssh_runner.go:195] Run: crio --version
	I0210 11:47:29.505852  172785 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0210 11:47:29.507024  172785 main.go:141] libmachine: (old-k8s-version-510006) Calling .GetIP
	I0210 11:47:29.509820  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:29.510108  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:cc:39", ip: ""} in network mk-old-k8s-version-510006: {Iface:virbr3 ExpiryTime:2025-02-10 12:47:19 +0000 UTC Type:0 Mac:52:54:00:57:cc:39 Iaid: IPaddr:192.168.61.244 Prefix:24 Hostname:old-k8s-version-510006 Clientid:01:52:54:00:57:cc:39}
	I0210 11:47:29.510132  172785 main.go:141] libmachine: (old-k8s-version-510006) DBG | domain old-k8s-version-510006 has defined IP address 192.168.61.244 and MAC address 52:54:00:57:cc:39 in network mk-old-k8s-version-510006
	I0210 11:47:29.510354  172785 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0210 11:47:29.514109  172785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:47:29.526474  172785 kubeadm.go:883] updating cluster {Name:old-k8s-version-510006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:47:29.526578  172785 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 11:47:29.526629  172785 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:47:29.571230  172785 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 11:47:29.571315  172785 ssh_runner.go:195] Run: which lz4
	I0210 11:47:29.575396  172785 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 11:47:29.579406  172785 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 11:47:29.579439  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0210 11:47:30.955212  172785 crio.go:462] duration metric: took 1.379829798s to copy over tarball
	I0210 11:47:30.955284  172785 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 11:47:33.777863  172785 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.822544662s)
	I0210 11:47:33.777901  172785 crio.go:469] duration metric: took 2.822656823s to extract the tarball
	I0210 11:47:33.777913  172785 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 11:47:33.819614  172785 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:47:33.856472  172785 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 11:47:33.856508  172785 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 11:47:33.856598  172785 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:47:33.856609  172785 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:47:33.856622  172785 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:47:33.856624  172785 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:47:33.856663  172785 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0210 11:47:33.856645  172785 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0210 11:47:33.856695  172785 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0210 11:47:33.856601  172785 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:47:33.858168  172785 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:47:33.858186  172785 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0210 11:47:33.858186  172785 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0210 11:47:33.858173  172785 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:47:33.858168  172785 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:47:33.858237  172785 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:47:33.858289  172785 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0210 11:47:33.858243  172785 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:47:34.072201  172785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0210 11:47:34.075204  172785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:47:34.098577  172785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0210 11:47:34.106329  172785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:47:34.126243  172785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0210 11:47:34.134696  172785 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0210 11:47:34.134747  172785 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0210 11:47:34.134836  172785 ssh_runner.go:195] Run: which crictl
	I0210 11:47:34.136810  172785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:47:34.136928  172785 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0210 11:47:34.136973  172785 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:47:34.137015  172785 ssh_runner.go:195] Run: which crictl
	I0210 11:47:34.171973  172785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:47:34.231409  172785 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0210 11:47:34.231462  172785 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:47:34.231468  172785 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0210 11:47:34.231470  172785 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0210 11:47:34.231487  172785 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0210 11:47:34.231507  172785 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0210 11:47:34.231513  172785 ssh_runner.go:195] Run: which crictl
	I0210 11:47:34.231540  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 11:47:34.231547  172785 ssh_runner.go:195] Run: which crictl
	I0210 11:47:34.231510  172785 ssh_runner.go:195] Run: which crictl
	I0210 11:47:34.256989  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:47:34.257234  172785 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0210 11:47:34.257282  172785 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:47:34.257327  172785 ssh_runner.go:195] Run: which crictl
	I0210 11:47:34.269609  172785 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0210 11:47:34.269652  172785 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:47:34.269695  172785 ssh_runner.go:195] Run: which crictl
	I0210 11:47:34.269725  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 11:47:34.324464  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 11:47:34.324539  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:47:34.324599  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 11:47:34.329317  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:47:34.329378  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 11:47:34.329440  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:47:34.329496  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:47:34.471476  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 11:47:34.471529  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:47:34.471551  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 11:47:34.471612  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:47:34.471679  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 11:47:34.471710  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:47:34.471769  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 11:47:34.619653  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 11:47:34.619684  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 11:47:34.638920  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 11:47:34.638984  172785 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0210 11:47:34.639059  172785 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0210 11:47:34.639073  172785 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0210 11:47:34.639200  172785 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 11:47:34.691123  172785 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0210 11:47:34.691148  172785 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0210 11:47:34.720518  172785 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0210 11:47:34.720647  172785 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0210 11:47:34.983301  172785 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:47:35.118100  172785 cache_images.go:92] duration metric: took 1.26157157s to LoadCachedImages
	W0210 11:47:35.118204  172785 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20385-109271/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0210 11:47:35.118217  172785 kubeadm.go:934] updating node { 192.168.61.244 8443 v1.20.0 crio true true} ...
	I0210 11:47:35.118355  172785 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-510006 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:47:35.118448  172785 ssh_runner.go:195] Run: crio config
	I0210 11:47:35.166636  172785 cni.go:84] Creating CNI manager for ""
	I0210 11:47:35.166668  172785 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:47:35.166683  172785 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 11:47:35.166704  172785 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.244 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-510006 NodeName:old-k8s-version-510006 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0210 11:47:35.166827  172785 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-510006"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 11:47:35.166887  172785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0210 11:47:35.177790  172785 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:47:35.177850  172785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:47:35.187689  172785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0210 11:47:35.203997  172785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:47:35.219811  172785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0210 11:47:35.235345  172785 ssh_runner.go:195] Run: grep 192.168.61.244	control-plane.minikube.internal$ /etc/hosts
	I0210 11:47:35.238806  172785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:47:35.250720  172785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:47:35.372025  172785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:47:35.389314  172785 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006 for IP: 192.168.61.244
	I0210 11:47:35.389339  172785 certs.go:194] generating shared ca certs ...
	I0210 11:47:35.389367  172785 certs.go:226] acquiring lock for ca certs: {Name:mk41def3593b0ff6effd099cf80de2e0c576c931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:47:35.389525  172785 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key
	I0210 11:47:35.389578  172785 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key
	I0210 11:47:35.389594  172785 certs.go:256] generating profile certs ...
	I0210 11:47:35.389704  172785 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/client.key
	I0210 11:47:35.389787  172785 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.key.25437697
	I0210 11:47:35.389847  172785 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/proxy-client.key
	I0210 11:47:35.389994  172785 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem (1338 bytes)
	W0210 11:47:35.390042  172785 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470_empty.pem, impossibly tiny 0 bytes
	I0210 11:47:35.390055  172785 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 11:47:35.390076  172785 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem (1078 bytes)
	I0210 11:47:35.390108  172785 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem (1123 bytes)
	I0210 11:47:35.390149  172785 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem (1679 bytes)
	I0210 11:47:35.390208  172785 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:47:35.390965  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:47:35.424585  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0210 11:47:35.462149  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:47:35.491104  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 11:47:35.526300  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0210 11:47:35.571483  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:47:35.611308  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:47:35.640983  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/old-k8s-version-510006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:47:35.668552  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:47:35.692440  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem --> /usr/share/ca-certificates/116470.pem (1338 bytes)
	I0210 11:47:35.716701  172785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /usr/share/ca-certificates/1164702.pem (1708 bytes)
	I0210 11:47:35.746512  172785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 11:47:35.763761  172785 ssh_runner.go:195] Run: openssl version
	I0210 11:47:35.769404  172785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:47:35.782469  172785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:47:35.787038  172785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:47:35.787108  172785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:47:35.792851  172785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:47:35.803604  172785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116470.pem && ln -fs /usr/share/ca-certificates/116470.pem /etc/ssl/certs/116470.pem"
	I0210 11:47:35.814677  172785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116470.pem
	I0210 11:47:35.819162  172785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:41 /usr/share/ca-certificates/116470.pem
	I0210 11:47:35.819258  172785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116470.pem
	I0210 11:47:35.824785  172785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116470.pem /etc/ssl/certs/51391683.0"
	I0210 11:47:35.835003  172785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1164702.pem && ln -fs /usr/share/ca-certificates/1164702.pem /etc/ssl/certs/1164702.pem"
	I0210 11:47:35.846029  172785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1164702.pem
	I0210 11:47:35.850323  172785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:41 /usr/share/ca-certificates/1164702.pem
	I0210 11:47:35.850380  172785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1164702.pem
	I0210 11:47:35.855916  172785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1164702.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:47:35.866330  172785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:47:35.871334  172785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 11:47:35.876995  172785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 11:47:35.882596  172785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 11:47:35.888632  172785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 11:47:35.893981  172785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 11:47:35.899567  172785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 11:47:35.905104  172785 kubeadm.go:392] StartCluster: {Name:old-k8s-version-510006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-510006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.244 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:47:35.905191  172785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 11:47:35.905236  172785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:47:35.941693  172785 cri.go:89] found id: ""
	I0210 11:47:35.941777  172785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 11:47:35.951686  172785 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 11:47:35.951711  172785 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 11:47:35.951765  172785 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 11:47:35.961795  172785 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 11:47:35.962751  172785 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-510006" does not appear in /home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:47:35.963373  172785 kubeconfig.go:62] /home/jenkins/minikube-integration/20385-109271/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-510006" cluster setting kubeconfig missing "old-k8s-version-510006" context setting]
	I0210 11:47:35.964230  172785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/kubeconfig: {Name:mk38b84c4ae8f3ad09ecb56633115faef0fe39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:47:36.035882  172785 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 11:47:36.047274  172785 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.244
	I0210 11:47:36.047321  172785 kubeadm.go:1160] stopping kube-system containers ...
	I0210 11:47:36.047338  172785 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 11:47:36.047387  172785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:47:36.084664  172785 cri.go:89] found id: ""
	I0210 11:47:36.084739  172785 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 11:47:36.101077  172785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:47:36.111237  172785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:47:36.111261  172785 kubeadm.go:157] found existing configuration files:
	
	I0210 11:47:36.111324  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:47:36.120383  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:47:36.120443  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:47:36.129261  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:47:36.137996  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:47:36.138065  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:47:36.147301  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:47:36.156731  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:47:36.156803  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:47:36.165932  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:47:36.174475  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:47:36.174546  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:47:36.183844  172785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:47:36.193541  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:47:36.407618  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:47:37.463087  172785 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.055423586s)
	I0210 11:47:37.463138  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:47:37.692285  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:47:37.808519  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:47:37.891802  172785 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:47:37.891896  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:38.392253  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:38.892994  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:39.392933  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:39.892788  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:40.392036  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:40.892086  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:41.392973  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:41.892389  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:42.392407  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:42.892999  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:43.392306  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:43.892902  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:44.392396  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:44.892387  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:45.392324  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:45.891976  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:46.392438  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:46.892181  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:47.392056  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:47.892182  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:48.392760  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:48.892015  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:49.391990  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:49.892068  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:50.392228  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:50.892781  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:51.392906  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:51.892060  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:52.392429  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:52.892854  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:53.392424  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:53.892317  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:54.392951  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:54.892612  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:55.392702  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:55.891967  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:56.392637  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:56.892996  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:57.392381  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:57.892842  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:58.392988  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:58.892227  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:59.392453  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:47:59.892035  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:00.392667  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:00.892570  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:01.392626  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:01.892432  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:02.392426  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:02.892481  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:03.392117  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:03.892308  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:04.392108  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:04.892896  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:05.392394  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:05.892430  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:06.392688  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:06.892318  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:07.392427  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:07.892303  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:08.392960  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:08.892605  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:09.392834  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:09.892019  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:10.392759  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:10.892663  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:11.392944  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:11.892477  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:12.392183  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:12.892816  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:13.392410  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:13.892398  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:14.392413  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:14.892188  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:15.392420  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:15.893003  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:16.392868  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:16.892645  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:17.392411  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:17.892574  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:18.392406  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:18.892067  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:19.392341  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:19.892264  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:20.392827  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:20.892484  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:21.392417  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:21.892233  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:22.392623  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:22.892484  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:23.392015  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:23.892453  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:24.392424  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:24.892608  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:25.392039  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:25.892411  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:26.392081  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:26.892634  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:27.392942  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:27.892245  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:28.392017  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:28.892240  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:29.392745  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:29.892405  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:30.392996  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:30.891986  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:31.391993  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:31.892420  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:32.392389  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:32.892927  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:33.392402  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:33.892130  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:34.392930  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:34.892751  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:35.392175  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:35.892866  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:36.392312  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:36.891999  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:37.392371  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:37.892801  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:48:37.892883  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:48:37.930531  172785 cri.go:89] found id: ""
	I0210 11:48:37.930563  172785 logs.go:282] 0 containers: []
	W0210 11:48:37.930572  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:48:37.930578  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:48:37.930637  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:48:37.963759  172785 cri.go:89] found id: ""
	I0210 11:48:37.963790  172785 logs.go:282] 0 containers: []
	W0210 11:48:37.963798  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:48:37.963804  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:48:37.963861  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:48:37.998341  172785 cri.go:89] found id: ""
	I0210 11:48:37.998381  172785 logs.go:282] 0 containers: []
	W0210 11:48:37.998394  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:48:37.998405  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:48:37.998466  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:48:38.035546  172785 cri.go:89] found id: ""
	I0210 11:48:38.035574  172785 logs.go:282] 0 containers: []
	W0210 11:48:38.035583  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:48:38.035592  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:48:38.035650  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:48:38.079577  172785 cri.go:89] found id: ""
	I0210 11:48:38.079607  172785 logs.go:282] 0 containers: []
	W0210 11:48:38.079619  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:48:38.079634  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:48:38.079716  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:48:38.116809  172785 cri.go:89] found id: ""
	I0210 11:48:38.116852  172785 logs.go:282] 0 containers: []
	W0210 11:48:38.116874  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:48:38.116883  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:48:38.116962  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:48:38.154223  172785 cri.go:89] found id: ""
	I0210 11:48:38.154255  172785 logs.go:282] 0 containers: []
	W0210 11:48:38.154264  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:48:38.154271  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:48:38.154323  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:48:38.192156  172785 cri.go:89] found id: ""
	I0210 11:48:38.192192  172785 logs.go:282] 0 containers: []
	W0210 11:48:38.192203  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:48:38.192217  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:48:38.192231  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:48:38.205042  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:48:38.205075  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:48:38.329167  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:48:38.329197  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:48:38.329215  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:48:38.400556  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:48:38.400595  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:48:38.438690  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:48:38.438728  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:48:40.989979  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:41.002599  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:48:41.002688  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:48:41.035798  172785 cri.go:89] found id: ""
	I0210 11:48:41.035827  172785 logs.go:282] 0 containers: []
	W0210 11:48:41.035835  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:48:41.035842  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:48:41.035896  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:48:41.070647  172785 cri.go:89] found id: ""
	I0210 11:48:41.070680  172785 logs.go:282] 0 containers: []
	W0210 11:48:41.070688  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:48:41.070694  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:48:41.070745  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:48:41.105441  172785 cri.go:89] found id: ""
	I0210 11:48:41.105475  172785 logs.go:282] 0 containers: []
	W0210 11:48:41.105487  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:48:41.105495  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:48:41.105562  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:48:41.139118  172785 cri.go:89] found id: ""
	I0210 11:48:41.139149  172785 logs.go:282] 0 containers: []
	W0210 11:48:41.139159  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:48:41.139167  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:48:41.139250  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:48:41.176768  172785 cri.go:89] found id: ""
	I0210 11:48:41.176809  172785 logs.go:282] 0 containers: []
	W0210 11:48:41.176822  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:48:41.176829  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:48:41.176986  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:48:41.232257  172785 cri.go:89] found id: ""
	I0210 11:48:41.232285  172785 logs.go:282] 0 containers: []
	W0210 11:48:41.232295  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:48:41.232304  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:48:41.232372  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:48:41.290028  172785 cri.go:89] found id: ""
	I0210 11:48:41.290060  172785 logs.go:282] 0 containers: []
	W0210 11:48:41.290072  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:48:41.290078  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:48:41.290153  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:48:41.337199  172785 cri.go:89] found id: ""
	I0210 11:48:41.337231  172785 logs.go:282] 0 containers: []
	W0210 11:48:41.337239  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:48:41.337249  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:48:41.337260  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:48:41.384160  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:48:41.384193  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:48:41.434707  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:48:41.434748  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:48:41.448534  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:48:41.448564  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:48:41.527929  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:48:41.527952  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:48:41.527967  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:48:44.109558  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:44.121707  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:48:44.121786  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:48:44.153832  172785 cri.go:89] found id: ""
	I0210 11:48:44.153862  172785 logs.go:282] 0 containers: []
	W0210 11:48:44.153871  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:48:44.153878  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:48:44.153927  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:48:44.187733  172785 cri.go:89] found id: ""
	I0210 11:48:44.187762  172785 logs.go:282] 0 containers: []
	W0210 11:48:44.187772  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:48:44.187780  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:48:44.187843  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:48:44.221927  172785 cri.go:89] found id: ""
	I0210 11:48:44.221964  172785 logs.go:282] 0 containers: []
	W0210 11:48:44.221973  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:48:44.221979  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:48:44.222030  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:48:44.255168  172785 cri.go:89] found id: ""
	I0210 11:48:44.255215  172785 logs.go:282] 0 containers: []
	W0210 11:48:44.255227  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:48:44.255236  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:48:44.255295  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:48:44.287341  172785 cri.go:89] found id: ""
	I0210 11:48:44.287367  172785 logs.go:282] 0 containers: []
	W0210 11:48:44.287377  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:48:44.287382  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:48:44.287436  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:48:44.319892  172785 cri.go:89] found id: ""
	I0210 11:48:44.319931  172785 logs.go:282] 0 containers: []
	W0210 11:48:44.319940  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:48:44.319947  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:48:44.319997  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:48:44.352298  172785 cri.go:89] found id: ""
	I0210 11:48:44.352335  172785 logs.go:282] 0 containers: []
	W0210 11:48:44.352344  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:48:44.352350  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:48:44.352411  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:48:44.388796  172785 cri.go:89] found id: ""
	I0210 11:48:44.388824  172785 logs.go:282] 0 containers: []
	W0210 11:48:44.388831  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:48:44.388840  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:48:44.388852  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:48:44.437360  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:48:44.437401  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:48:44.450165  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:48:44.450197  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:48:44.521379  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:48:44.521412  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:48:44.521428  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:48:44.599824  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:48:44.599866  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:48:47.137581  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:47.150234  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:48:47.150294  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:48:47.185934  172785 cri.go:89] found id: ""
	I0210 11:48:47.185973  172785 logs.go:282] 0 containers: []
	W0210 11:48:47.185986  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:48:47.185995  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:48:47.186070  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:48:47.220842  172785 cri.go:89] found id: ""
	I0210 11:48:47.220870  172785 logs.go:282] 0 containers: []
	W0210 11:48:47.220877  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:48:47.220882  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:48:47.220933  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:48:47.255931  172785 cri.go:89] found id: ""
	I0210 11:48:47.255964  172785 logs.go:282] 0 containers: []
	W0210 11:48:47.255983  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:48:47.255989  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:48:47.256053  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:48:47.287531  172785 cri.go:89] found id: ""
	I0210 11:48:47.287560  172785 logs.go:282] 0 containers: []
	W0210 11:48:47.287570  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:48:47.287578  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:48:47.287646  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:48:47.318413  172785 cri.go:89] found id: ""
	I0210 11:48:47.318448  172785 logs.go:282] 0 containers: []
	W0210 11:48:47.318460  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:48:47.318468  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:48:47.318526  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:48:47.350300  172785 cri.go:89] found id: ""
	I0210 11:48:47.350338  172785 logs.go:282] 0 containers: []
	W0210 11:48:47.350347  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:48:47.350353  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:48:47.350411  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:48:47.381941  172785 cri.go:89] found id: ""
	I0210 11:48:47.381972  172785 logs.go:282] 0 containers: []
	W0210 11:48:47.381980  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:48:47.381986  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:48:47.382053  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:48:47.413231  172785 cri.go:89] found id: ""
	I0210 11:48:47.413264  172785 logs.go:282] 0 containers: []
	W0210 11:48:47.413273  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:48:47.413283  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:48:47.413294  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:48:47.477237  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:48:47.477266  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:48:47.477280  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:48:47.554442  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:48:47.554476  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:48:47.591594  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:48:47.591626  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:48:47.643623  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:48:47.643654  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:48:50.158864  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:50.174887  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:48:50.174996  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:48:50.209847  172785 cri.go:89] found id: ""
	I0210 11:48:50.209879  172785 logs.go:282] 0 containers: []
	W0210 11:48:50.209891  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:48:50.209899  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:48:50.209961  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:48:50.245309  172785 cri.go:89] found id: ""
	I0210 11:48:50.245342  172785 logs.go:282] 0 containers: []
	W0210 11:48:50.245352  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:48:50.245400  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:48:50.245473  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:48:50.274921  172785 cri.go:89] found id: ""
	I0210 11:48:50.274954  172785 logs.go:282] 0 containers: []
	W0210 11:48:50.274967  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:48:50.274975  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:48:50.275040  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:48:50.304762  172785 cri.go:89] found id: ""
	I0210 11:48:50.304791  172785 logs.go:282] 0 containers: []
	W0210 11:48:50.304799  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:48:50.304805  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:48:50.304857  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:48:50.335913  172785 cri.go:89] found id: ""
	I0210 11:48:50.335948  172785 logs.go:282] 0 containers: []
	W0210 11:48:50.335959  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:48:50.335967  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:48:50.336028  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:48:50.366873  172785 cri.go:89] found id: ""
	I0210 11:48:50.366909  172785 logs.go:282] 0 containers: []
	W0210 11:48:50.366921  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:48:50.366929  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:48:50.366994  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:48:50.402070  172785 cri.go:89] found id: ""
	I0210 11:48:50.402100  172785 logs.go:282] 0 containers: []
	W0210 11:48:50.402109  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:48:50.402115  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:48:50.402166  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:48:50.433876  172785 cri.go:89] found id: ""
	I0210 11:48:50.433907  172785 logs.go:282] 0 containers: []
	W0210 11:48:50.433918  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:48:50.433931  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:48:50.433952  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:48:50.472660  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:48:50.472690  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:48:50.523978  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:48:50.524014  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:48:50.536548  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:48:50.536576  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:48:50.598932  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:48:50.598965  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:48:50.598982  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:48:53.172406  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:53.186463  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:48:53.186523  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:48:53.219406  172785 cri.go:89] found id: ""
	I0210 11:48:53.219432  172785 logs.go:282] 0 containers: []
	W0210 11:48:53.219440  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:48:53.219445  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:48:53.219501  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:48:53.250826  172785 cri.go:89] found id: ""
	I0210 11:48:53.250863  172785 logs.go:282] 0 containers: []
	W0210 11:48:53.250876  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:48:53.250883  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:48:53.250951  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:48:53.281624  172785 cri.go:89] found id: ""
	I0210 11:48:53.281656  172785 logs.go:282] 0 containers: []
	W0210 11:48:53.281664  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:48:53.281672  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:48:53.281731  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:48:53.311407  172785 cri.go:89] found id: ""
	I0210 11:48:53.311441  172785 logs.go:282] 0 containers: []
	W0210 11:48:53.311451  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:48:53.311460  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:48:53.311521  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:48:53.342556  172785 cri.go:89] found id: ""
	I0210 11:48:53.342580  172785 logs.go:282] 0 containers: []
	W0210 11:48:53.342588  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:48:53.342593  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:48:53.342644  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:48:53.377653  172785 cri.go:89] found id: ""
	I0210 11:48:53.377682  172785 logs.go:282] 0 containers: []
	W0210 11:48:53.377690  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:48:53.377697  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:48:53.377759  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:48:53.410334  172785 cri.go:89] found id: ""
	I0210 11:48:53.410370  172785 logs.go:282] 0 containers: []
	W0210 11:48:53.410382  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:48:53.410391  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:48:53.410454  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:48:53.442294  172785 cri.go:89] found id: ""
	I0210 11:48:53.442323  172785 logs.go:282] 0 containers: []
	W0210 11:48:53.442335  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:48:53.442345  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:48:53.442356  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:48:53.490236  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:48:53.490270  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:48:53.502893  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:48:53.502917  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:48:53.572727  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:48:53.572771  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:48:53.572786  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:48:53.650370  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:48:53.650416  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:48:56.195561  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:56.208308  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:48:56.208387  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:48:56.241449  172785 cri.go:89] found id: ""
	I0210 11:48:56.241485  172785 logs.go:282] 0 containers: []
	W0210 11:48:56.241495  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:48:56.241503  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:48:56.241567  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:48:56.275291  172785 cri.go:89] found id: ""
	I0210 11:48:56.275323  172785 logs.go:282] 0 containers: []
	W0210 11:48:56.275331  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:48:56.275337  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:48:56.275399  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:48:56.306721  172785 cri.go:89] found id: ""
	I0210 11:48:56.306752  172785 logs.go:282] 0 containers: []
	W0210 11:48:56.306764  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:48:56.306772  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:48:56.306830  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:48:56.338961  172785 cri.go:89] found id: ""
	I0210 11:48:56.338992  172785 logs.go:282] 0 containers: []
	W0210 11:48:56.339003  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:48:56.339012  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:48:56.339074  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:48:56.372718  172785 cri.go:89] found id: ""
	I0210 11:48:56.372749  172785 logs.go:282] 0 containers: []
	W0210 11:48:56.372757  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:48:56.372763  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:48:56.372819  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:48:56.405196  172785 cri.go:89] found id: ""
	I0210 11:48:56.405238  172785 logs.go:282] 0 containers: []
	W0210 11:48:56.405252  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:48:56.405261  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:48:56.405339  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:48:56.443252  172785 cri.go:89] found id: ""
	I0210 11:48:56.443288  172785 logs.go:282] 0 containers: []
	W0210 11:48:56.443300  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:48:56.443308  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:48:56.443374  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:48:56.475588  172785 cri.go:89] found id: ""
	I0210 11:48:56.475616  172785 logs.go:282] 0 containers: []
	W0210 11:48:56.475623  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:48:56.475633  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:48:56.475645  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:48:56.488146  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:48:56.488175  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:48:56.558103  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:48:56.558134  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:48:56.558150  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:48:56.630697  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:48:56.630735  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:48:56.666586  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:48:56.666620  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:48:59.223834  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:48:59.236893  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:48:59.236965  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:48:59.271053  172785 cri.go:89] found id: ""
	I0210 11:48:59.271089  172785 logs.go:282] 0 containers: []
	W0210 11:48:59.271098  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:48:59.271105  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:48:59.271166  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:48:59.302504  172785 cri.go:89] found id: ""
	I0210 11:48:59.302557  172785 logs.go:282] 0 containers: []
	W0210 11:48:59.302568  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:48:59.302575  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:48:59.302639  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:48:59.331919  172785 cri.go:89] found id: ""
	I0210 11:48:59.331951  172785 logs.go:282] 0 containers: []
	W0210 11:48:59.331961  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:48:59.331968  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:48:59.332036  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:48:59.363123  172785 cri.go:89] found id: ""
	I0210 11:48:59.363164  172785 logs.go:282] 0 containers: []
	W0210 11:48:59.363175  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:48:59.363199  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:48:59.363270  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:48:59.399388  172785 cri.go:89] found id: ""
	I0210 11:48:59.399423  172785 logs.go:282] 0 containers: []
	W0210 11:48:59.399432  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:48:59.399438  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:48:59.399503  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:48:59.433762  172785 cri.go:89] found id: ""
	I0210 11:48:59.433796  172785 logs.go:282] 0 containers: []
	W0210 11:48:59.433809  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:48:59.433817  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:48:59.433882  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:48:59.464542  172785 cri.go:89] found id: ""
	I0210 11:48:59.464574  172785 logs.go:282] 0 containers: []
	W0210 11:48:59.464585  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:48:59.464594  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:48:59.464654  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:48:59.499573  172785 cri.go:89] found id: ""
	I0210 11:48:59.499613  172785 logs.go:282] 0 containers: []
	W0210 11:48:59.499623  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:48:59.499636  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:48:59.499653  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:48:59.512084  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:48:59.512122  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:48:59.578284  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:48:59.578309  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:48:59.578320  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:48:59.649662  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:48:59.649705  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:48:59.685753  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:48:59.685783  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:02.235725  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:02.247887  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:02.247944  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:02.281974  172785 cri.go:89] found id: ""
	I0210 11:49:02.282001  172785 logs.go:282] 0 containers: []
	W0210 11:49:02.282009  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:02.282015  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:02.282062  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:02.315825  172785 cri.go:89] found id: ""
	I0210 11:49:02.315856  172785 logs.go:282] 0 containers: []
	W0210 11:49:02.315863  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:02.315870  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:02.315932  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:02.347104  172785 cri.go:89] found id: ""
	I0210 11:49:02.347138  172785 logs.go:282] 0 containers: []
	W0210 11:49:02.347149  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:02.347158  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:02.347240  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:02.381335  172785 cri.go:89] found id: ""
	I0210 11:49:02.381366  172785 logs.go:282] 0 containers: []
	W0210 11:49:02.381375  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:02.381381  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:02.381441  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:02.414867  172785 cri.go:89] found id: ""
	I0210 11:49:02.414895  172785 logs.go:282] 0 containers: []
	W0210 11:49:02.414903  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:02.414909  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:02.414974  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:02.452189  172785 cri.go:89] found id: ""
	I0210 11:49:02.452218  172785 logs.go:282] 0 containers: []
	W0210 11:49:02.452226  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:02.452232  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:02.452282  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:02.486881  172785 cri.go:89] found id: ""
	I0210 11:49:02.486906  172785 logs.go:282] 0 containers: []
	W0210 11:49:02.486915  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:02.486921  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:02.486968  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:02.519236  172785 cri.go:89] found id: ""
	I0210 11:49:02.519264  172785 logs.go:282] 0 containers: []
	W0210 11:49:02.519274  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:02.519286  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:02.519304  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:02.589337  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:02.589368  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:02.589384  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:02.664473  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:02.664509  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:02.702447  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:02.702483  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:02.753512  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:02.753546  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:05.266570  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:05.280620  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:05.280679  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:05.312248  172785 cri.go:89] found id: ""
	I0210 11:49:05.312283  172785 logs.go:282] 0 containers: []
	W0210 11:49:05.312295  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:05.312303  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:05.312368  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:05.343828  172785 cri.go:89] found id: ""
	I0210 11:49:05.343862  172785 logs.go:282] 0 containers: []
	W0210 11:49:05.343872  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:05.343880  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:05.343956  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:05.375537  172785 cri.go:89] found id: ""
	I0210 11:49:05.375565  172785 logs.go:282] 0 containers: []
	W0210 11:49:05.375574  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:05.375579  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:05.375630  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:05.409304  172785 cri.go:89] found id: ""
	I0210 11:49:05.409335  172785 logs.go:282] 0 containers: []
	W0210 11:49:05.409345  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:05.409353  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:05.409424  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:05.443943  172785 cri.go:89] found id: ""
	I0210 11:49:05.443978  172785 logs.go:282] 0 containers: []
	W0210 11:49:05.443987  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:05.443993  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:05.444046  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:05.476840  172785 cri.go:89] found id: ""
	I0210 11:49:05.476870  172785 logs.go:282] 0 containers: []
	W0210 11:49:05.476878  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:05.476885  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:05.476953  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:05.511779  172785 cri.go:89] found id: ""
	I0210 11:49:05.511811  172785 logs.go:282] 0 containers: []
	W0210 11:49:05.511822  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:05.511830  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:05.511884  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:05.545430  172785 cri.go:89] found id: ""
	I0210 11:49:05.545463  172785 logs.go:282] 0 containers: []
	W0210 11:49:05.545471  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:05.545481  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:05.545491  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:05.592942  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:05.592977  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:05.605556  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:05.605586  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:05.672858  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:05.672887  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:05.672900  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:05.748150  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:05.748193  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:08.287549  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:08.301807  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:08.301874  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:08.334318  172785 cri.go:89] found id: ""
	I0210 11:49:08.334353  172785 logs.go:282] 0 containers: []
	W0210 11:49:08.334366  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:08.334383  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:08.334442  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:08.368433  172785 cri.go:89] found id: ""
	I0210 11:49:08.368461  172785 logs.go:282] 0 containers: []
	W0210 11:49:08.368472  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:08.368480  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:08.368537  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:08.400779  172785 cri.go:89] found id: ""
	I0210 11:49:08.400808  172785 logs.go:282] 0 containers: []
	W0210 11:49:08.400817  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:08.400824  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:08.400883  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:08.437862  172785 cri.go:89] found id: ""
	I0210 11:49:08.437898  172785 logs.go:282] 0 containers: []
	W0210 11:49:08.437910  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:08.437918  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:08.437968  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:08.473115  172785 cri.go:89] found id: ""
	I0210 11:49:08.473144  172785 logs.go:282] 0 containers: []
	W0210 11:49:08.473169  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:08.473177  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:08.473256  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:08.504160  172785 cri.go:89] found id: ""
	I0210 11:49:08.504192  172785 logs.go:282] 0 containers: []
	W0210 11:49:08.504202  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:08.504208  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:08.504265  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:08.536967  172785 cri.go:89] found id: ""
	I0210 11:49:08.537001  172785 logs.go:282] 0 containers: []
	W0210 11:49:08.537013  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:08.537020  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:08.537072  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:08.567866  172785 cri.go:89] found id: ""
	I0210 11:49:08.567895  172785 logs.go:282] 0 containers: []
	W0210 11:49:08.567904  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:08.567914  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:08.567925  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:08.616547  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:08.616582  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:08.630744  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:08.630778  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:08.707201  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:08.707226  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:08.707241  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:08.785110  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:08.785150  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:11.327348  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:11.340150  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:11.340211  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:11.375220  172785 cri.go:89] found id: ""
	I0210 11:49:11.375259  172785 logs.go:282] 0 containers: []
	W0210 11:49:11.375268  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:11.375276  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:11.375350  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:11.406656  172785 cri.go:89] found id: ""
	I0210 11:49:11.406683  172785 logs.go:282] 0 containers: []
	W0210 11:49:11.406691  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:11.406698  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:11.406758  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:11.440080  172785 cri.go:89] found id: ""
	I0210 11:49:11.440117  172785 logs.go:282] 0 containers: []
	W0210 11:49:11.440127  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:11.440133  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:11.440199  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:11.475951  172785 cri.go:89] found id: ""
	I0210 11:49:11.475979  172785 logs.go:282] 0 containers: []
	W0210 11:49:11.475987  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:11.475993  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:11.476055  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:11.511495  172785 cri.go:89] found id: ""
	I0210 11:49:11.511522  172785 logs.go:282] 0 containers: []
	W0210 11:49:11.511530  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:11.511536  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:11.511584  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:11.544722  172785 cri.go:89] found id: ""
	I0210 11:49:11.544757  172785 logs.go:282] 0 containers: []
	W0210 11:49:11.544768  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:11.544778  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:11.544843  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:11.579490  172785 cri.go:89] found id: ""
	I0210 11:49:11.579514  172785 logs.go:282] 0 containers: []
	W0210 11:49:11.579523  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:11.579528  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:11.579581  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:11.611036  172785 cri.go:89] found id: ""
	I0210 11:49:11.611073  172785 logs.go:282] 0 containers: []
	W0210 11:49:11.611085  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:11.611097  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:11.611111  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:11.664791  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:11.664831  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:11.677950  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:11.677982  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:11.753087  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:11.753117  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:11.753133  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:11.832966  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:11.833014  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:14.371513  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:14.384278  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:14.384358  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:14.420239  172785 cri.go:89] found id: ""
	I0210 11:49:14.420268  172785 logs.go:282] 0 containers: []
	W0210 11:49:14.420277  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:14.420283  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:14.420336  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:14.452100  172785 cri.go:89] found id: ""
	I0210 11:49:14.452131  172785 logs.go:282] 0 containers: []
	W0210 11:49:14.452140  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:14.452146  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:14.452207  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:14.483373  172785 cri.go:89] found id: ""
	I0210 11:49:14.483413  172785 logs.go:282] 0 containers: []
	W0210 11:49:14.483425  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:14.483434  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:14.483504  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:14.516396  172785 cri.go:89] found id: ""
	I0210 11:49:14.516432  172785 logs.go:282] 0 containers: []
	W0210 11:49:14.516443  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:14.516452  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:14.516523  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:14.548165  172785 cri.go:89] found id: ""
	I0210 11:49:14.548199  172785 logs.go:282] 0 containers: []
	W0210 11:49:14.548211  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:14.548218  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:14.548279  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:14.580912  172785 cri.go:89] found id: ""
	I0210 11:49:14.580943  172785 logs.go:282] 0 containers: []
	W0210 11:49:14.580953  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:14.580962  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:14.581026  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:14.613195  172785 cri.go:89] found id: ""
	I0210 11:49:14.613227  172785 logs.go:282] 0 containers: []
	W0210 11:49:14.613235  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:14.613241  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:14.613295  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:14.648233  172785 cri.go:89] found id: ""
	I0210 11:49:14.648265  172785 logs.go:282] 0 containers: []
	W0210 11:49:14.648276  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:14.648289  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:14.648305  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:14.720954  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:14.720984  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:14.720997  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:14.804882  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:14.804934  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:14.844833  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:14.844871  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:14.901452  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:14.901490  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:17.415332  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:17.427930  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:17.427994  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:17.459822  172785 cri.go:89] found id: ""
	I0210 11:49:17.459853  172785 logs.go:282] 0 containers: []
	W0210 11:49:17.459862  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:17.459868  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:17.459929  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:17.493131  172785 cri.go:89] found id: ""
	I0210 11:49:17.493171  172785 logs.go:282] 0 containers: []
	W0210 11:49:17.493183  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:17.493191  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:17.493245  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:17.525327  172785 cri.go:89] found id: ""
	I0210 11:49:17.525357  172785 logs.go:282] 0 containers: []
	W0210 11:49:17.525367  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:17.525373  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:17.525444  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:17.561982  172785 cri.go:89] found id: ""
	I0210 11:49:17.562018  172785 logs.go:282] 0 containers: []
	W0210 11:49:17.562030  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:17.562038  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:17.562092  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:17.598697  172785 cri.go:89] found id: ""
	I0210 11:49:17.598723  172785 logs.go:282] 0 containers: []
	W0210 11:49:17.598730  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:17.598736  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:17.598786  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:17.632419  172785 cri.go:89] found id: ""
	I0210 11:49:17.632445  172785 logs.go:282] 0 containers: []
	W0210 11:49:17.632453  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:17.632459  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:17.632519  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:17.666775  172785 cri.go:89] found id: ""
	I0210 11:49:17.666803  172785 logs.go:282] 0 containers: []
	W0210 11:49:17.666811  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:17.666817  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:17.666877  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:17.706418  172785 cri.go:89] found id: ""
	I0210 11:49:17.706444  172785 logs.go:282] 0 containers: []
	W0210 11:49:17.706451  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:17.706460  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:17.706473  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:17.755648  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:17.755682  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:17.768725  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:17.768758  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:17.832799  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:17.832822  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:17.832834  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:17.906692  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:17.906736  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:20.444482  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:20.457432  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:20.457493  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:20.489890  172785 cri.go:89] found id: ""
	I0210 11:49:20.489920  172785 logs.go:282] 0 containers: []
	W0210 11:49:20.489928  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:20.489934  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:20.489985  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:20.521305  172785 cri.go:89] found id: ""
	I0210 11:49:20.521336  172785 logs.go:282] 0 containers: []
	W0210 11:49:20.521345  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:20.521354  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:20.521428  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:20.557445  172785 cri.go:89] found id: ""
	I0210 11:49:20.557475  172785 logs.go:282] 0 containers: []
	W0210 11:49:20.557483  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:20.557490  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:20.557542  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:20.593544  172785 cri.go:89] found id: ""
	I0210 11:49:20.593575  172785 logs.go:282] 0 containers: []
	W0210 11:49:20.593582  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:20.593588  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:20.593651  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:20.626185  172785 cri.go:89] found id: ""
	I0210 11:49:20.626213  172785 logs.go:282] 0 containers: []
	W0210 11:49:20.626220  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:20.626226  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:20.626283  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:20.670013  172785 cri.go:89] found id: ""
	I0210 11:49:20.670046  172785 logs.go:282] 0 containers: []
	W0210 11:49:20.670059  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:20.670066  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:20.670133  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:20.705986  172785 cri.go:89] found id: ""
	I0210 11:49:20.706026  172785 logs.go:282] 0 containers: []
	W0210 11:49:20.706037  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:20.706044  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:20.706111  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:20.743174  172785 cri.go:89] found id: ""
	I0210 11:49:20.743220  172785 logs.go:282] 0 containers: []
	W0210 11:49:20.743232  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:20.743247  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:20.743259  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:20.780341  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:20.780369  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:20.849632  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:20.849672  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:20.862160  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:20.862189  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:20.925898  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:20.925933  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:20.925948  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:23.504432  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:23.518953  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:23.519029  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:23.552789  172785 cri.go:89] found id: ""
	I0210 11:49:23.552821  172785 logs.go:282] 0 containers: []
	W0210 11:49:23.552831  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:23.552840  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:23.552904  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:23.585796  172785 cri.go:89] found id: ""
	I0210 11:49:23.585825  172785 logs.go:282] 0 containers: []
	W0210 11:49:23.585834  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:23.585840  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:23.585899  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:23.616315  172785 cri.go:89] found id: ""
	I0210 11:49:23.616344  172785 logs.go:282] 0 containers: []
	W0210 11:49:23.616353  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:23.616360  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:23.616417  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:23.649866  172785 cri.go:89] found id: ""
	I0210 11:49:23.649896  172785 logs.go:282] 0 containers: []
	W0210 11:49:23.649906  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:23.649916  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:23.649972  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:23.683915  172785 cri.go:89] found id: ""
	I0210 11:49:23.683944  172785 logs.go:282] 0 containers: []
	W0210 11:49:23.683954  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:23.683966  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:23.684033  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:23.722447  172785 cri.go:89] found id: ""
	I0210 11:49:23.722484  172785 logs.go:282] 0 containers: []
	W0210 11:49:23.722496  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:23.722505  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:23.722567  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:23.755785  172785 cri.go:89] found id: ""
	I0210 11:49:23.755821  172785 logs.go:282] 0 containers: []
	W0210 11:49:23.755830  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:23.755836  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:23.755894  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:23.789678  172785 cri.go:89] found id: ""
	I0210 11:49:23.789708  172785 logs.go:282] 0 containers: []
	W0210 11:49:23.789719  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:23.789731  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:23.789748  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:23.839617  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:23.839658  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:23.853254  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:23.853293  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:23.920156  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:23.920179  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:23.920192  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:23.999766  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:23.999809  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:26.538832  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:26.551441  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:26.551501  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:26.584732  172785 cri.go:89] found id: ""
	I0210 11:49:26.584768  172785 logs.go:282] 0 containers: []
	W0210 11:49:26.584779  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:26.584789  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:26.584862  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:26.629130  172785 cri.go:89] found id: ""
	I0210 11:49:26.629161  172785 logs.go:282] 0 containers: []
	W0210 11:49:26.629169  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:26.629175  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:26.629234  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:26.661500  172785 cri.go:89] found id: ""
	I0210 11:49:26.661531  172785 logs.go:282] 0 containers: []
	W0210 11:49:26.661541  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:26.661548  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:26.661611  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:26.694917  172785 cri.go:89] found id: ""
	I0210 11:49:26.694951  172785 logs.go:282] 0 containers: []
	W0210 11:49:26.694963  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:26.694971  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:26.695035  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:26.728362  172785 cri.go:89] found id: ""
	I0210 11:49:26.728398  172785 logs.go:282] 0 containers: []
	W0210 11:49:26.728409  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:26.728417  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:26.728487  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:26.760549  172785 cri.go:89] found id: ""
	I0210 11:49:26.760579  172785 logs.go:282] 0 containers: []
	W0210 11:49:26.760589  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:26.760597  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:26.760664  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:26.792727  172785 cri.go:89] found id: ""
	I0210 11:49:26.792763  172785 logs.go:282] 0 containers: []
	W0210 11:49:26.792774  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:26.792781  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:26.792844  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:26.824434  172785 cri.go:89] found id: ""
	I0210 11:49:26.824468  172785 logs.go:282] 0 containers: []
	W0210 11:49:26.824480  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:26.824494  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:26.824508  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:26.881094  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:26.881135  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:26.893891  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:26.893920  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:26.965331  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:26.965354  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:26.965366  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:27.045770  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:27.045814  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:29.583329  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:29.596489  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:29.596559  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:29.627963  172785 cri.go:89] found id: ""
	I0210 11:49:29.627991  172785 logs.go:282] 0 containers: []
	W0210 11:49:29.627999  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:29.628005  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:29.628051  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:29.660509  172785 cri.go:89] found id: ""
	I0210 11:49:29.660541  172785 logs.go:282] 0 containers: []
	W0210 11:49:29.660549  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:29.660555  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:29.660619  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:29.691850  172785 cri.go:89] found id: ""
	I0210 11:49:29.691881  172785 logs.go:282] 0 containers: []
	W0210 11:49:29.691892  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:29.691899  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:29.691961  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:29.727227  172785 cri.go:89] found id: ""
	I0210 11:49:29.727256  172785 logs.go:282] 0 containers: []
	W0210 11:49:29.727266  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:29.727274  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:29.727341  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:29.759160  172785 cri.go:89] found id: ""
	I0210 11:49:29.759209  172785 logs.go:282] 0 containers: []
	W0210 11:49:29.759221  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:29.759229  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:29.759285  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:29.790430  172785 cri.go:89] found id: ""
	I0210 11:49:29.790463  172785 logs.go:282] 0 containers: []
	W0210 11:49:29.790474  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:29.790484  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:29.790536  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:29.824534  172785 cri.go:89] found id: ""
	I0210 11:49:29.824568  172785 logs.go:282] 0 containers: []
	W0210 11:49:29.824579  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:29.824587  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:29.824652  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:29.855373  172785 cri.go:89] found id: ""
	I0210 11:49:29.855406  172785 logs.go:282] 0 containers: []
	W0210 11:49:29.855419  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:29.855431  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:29.855454  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:29.906506  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:29.906545  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:29.918886  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:29.918913  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:29.988006  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:29.988030  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:29.988047  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:30.066838  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:30.066876  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:32.605620  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:32.618360  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:32.618438  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:32.651489  172785 cri.go:89] found id: ""
	I0210 11:49:32.651518  172785 logs.go:282] 0 containers: []
	W0210 11:49:32.651529  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:32.651538  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:32.651593  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:32.687255  172785 cri.go:89] found id: ""
	I0210 11:49:32.687290  172785 logs.go:282] 0 containers: []
	W0210 11:49:32.687301  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:32.687309  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:32.687383  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:32.721208  172785 cri.go:89] found id: ""
	I0210 11:49:32.721239  172785 logs.go:282] 0 containers: []
	W0210 11:49:32.721249  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:32.721255  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:32.721319  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:32.754338  172785 cri.go:89] found id: ""
	I0210 11:49:32.754372  172785 logs.go:282] 0 containers: []
	W0210 11:49:32.754382  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:32.754389  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:32.754454  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:32.793899  172785 cri.go:89] found id: ""
	I0210 11:49:32.793929  172785 logs.go:282] 0 containers: []
	W0210 11:49:32.793939  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:32.793947  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:32.794009  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:32.826202  172785 cri.go:89] found id: ""
	I0210 11:49:32.826237  172785 logs.go:282] 0 containers: []
	W0210 11:49:32.826249  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:32.826256  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:32.826322  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:32.866854  172785 cri.go:89] found id: ""
	I0210 11:49:32.866889  172785 logs.go:282] 0 containers: []
	W0210 11:49:32.866900  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:32.866909  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:32.866970  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:32.898863  172785 cri.go:89] found id: ""
	I0210 11:49:32.898897  172785 logs.go:282] 0 containers: []
	W0210 11:49:32.898909  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:32.898922  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:32.898934  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:32.951888  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:32.951929  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:32.971493  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:32.971526  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:33.054868  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:33.054893  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:33.054908  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:33.134497  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:33.134538  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:35.674645  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:35.687269  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:35.687336  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:35.718319  172785 cri.go:89] found id: ""
	I0210 11:49:35.718349  172785 logs.go:282] 0 containers: []
	W0210 11:49:35.718359  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:35.718367  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:35.718427  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:35.753864  172785 cri.go:89] found id: ""
	I0210 11:49:35.753898  172785 logs.go:282] 0 containers: []
	W0210 11:49:35.753908  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:35.753916  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:35.753982  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:35.785201  172785 cri.go:89] found id: ""
	I0210 11:49:35.785231  172785 logs.go:282] 0 containers: []
	W0210 11:49:35.785239  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:35.785246  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:35.785302  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:35.816554  172785 cri.go:89] found id: ""
	I0210 11:49:35.816586  172785 logs.go:282] 0 containers: []
	W0210 11:49:35.816596  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:35.816603  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:35.816653  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:35.846903  172785 cri.go:89] found id: ""
	I0210 11:49:35.846933  172785 logs.go:282] 0 containers: []
	W0210 11:49:35.846942  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:35.846949  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:35.847016  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:35.881234  172785 cri.go:89] found id: ""
	I0210 11:49:35.881266  172785 logs.go:282] 0 containers: []
	W0210 11:49:35.881274  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:35.881280  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:35.881336  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:35.911719  172785 cri.go:89] found id: ""
	I0210 11:49:35.911748  172785 logs.go:282] 0 containers: []
	W0210 11:49:35.911755  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:35.911762  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:35.911816  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:35.944032  172785 cri.go:89] found id: ""
	I0210 11:49:35.944058  172785 logs.go:282] 0 containers: []
	W0210 11:49:35.944067  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:35.944076  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:35.944089  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:35.980364  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:35.980392  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:36.028708  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:36.028739  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:36.041958  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:36.041997  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:36.116896  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:36.116942  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:36.116957  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:38.695168  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:38.710637  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:38.710696  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:38.757596  172785 cri.go:89] found id: ""
	I0210 11:49:38.757634  172785 logs.go:282] 0 containers: []
	W0210 11:49:38.757646  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:38.757654  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:38.757718  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:38.801603  172785 cri.go:89] found id: ""
	I0210 11:49:38.801638  172785 logs.go:282] 0 containers: []
	W0210 11:49:38.801651  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:38.801659  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:38.801720  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:38.835565  172785 cri.go:89] found id: ""
	I0210 11:49:38.835593  172785 logs.go:282] 0 containers: []
	W0210 11:49:38.835601  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:38.835606  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:38.835672  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:38.868708  172785 cri.go:89] found id: ""
	I0210 11:49:38.868741  172785 logs.go:282] 0 containers: []
	W0210 11:49:38.868753  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:38.868760  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:38.868823  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:38.900744  172785 cri.go:89] found id: ""
	I0210 11:49:38.900776  172785 logs.go:282] 0 containers: []
	W0210 11:49:38.900787  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:38.900795  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:38.900858  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:38.930856  172785 cri.go:89] found id: ""
	I0210 11:49:38.930892  172785 logs.go:282] 0 containers: []
	W0210 11:49:38.930903  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:38.930911  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:38.930967  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:38.963296  172785 cri.go:89] found id: ""
	I0210 11:49:38.963329  172785 logs.go:282] 0 containers: []
	W0210 11:49:38.963340  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:38.963348  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:38.963415  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:38.994932  172785 cri.go:89] found id: ""
	I0210 11:49:38.994958  172785 logs.go:282] 0 containers: []
	W0210 11:49:38.994965  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:38.994974  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:38.994986  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:39.032501  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:39.032533  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:39.082357  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:39.082391  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:39.095446  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:39.095480  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:39.161592  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:39.161614  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:39.161626  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:41.741448  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:41.754592  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:41.754673  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:41.798576  172785 cri.go:89] found id: ""
	I0210 11:49:41.798616  172785 logs.go:282] 0 containers: []
	W0210 11:49:41.798624  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:41.798630  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:41.798715  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:41.831961  172785 cri.go:89] found id: ""
	I0210 11:49:41.832007  172785 logs.go:282] 0 containers: []
	W0210 11:49:41.832018  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:41.832028  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:41.832102  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:41.867059  172785 cri.go:89] found id: ""
	I0210 11:49:41.867099  172785 logs.go:282] 0 containers: []
	W0210 11:49:41.867111  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:41.867120  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:41.867208  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:41.907643  172785 cri.go:89] found id: ""
	I0210 11:49:41.907670  172785 logs.go:282] 0 containers: []
	W0210 11:49:41.907681  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:41.907690  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:41.907756  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:41.942435  172785 cri.go:89] found id: ""
	I0210 11:49:41.942475  172785 logs.go:282] 0 containers: []
	W0210 11:49:41.942487  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:41.942495  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:41.942564  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:41.980196  172785 cri.go:89] found id: ""
	I0210 11:49:41.980222  172785 logs.go:282] 0 containers: []
	W0210 11:49:41.980233  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:41.980241  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:41.980306  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:42.021558  172785 cri.go:89] found id: ""
	I0210 11:49:42.021595  172785 logs.go:282] 0 containers: []
	W0210 11:49:42.021607  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:42.021615  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:42.021683  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:42.062585  172785 cri.go:89] found id: ""
	I0210 11:49:42.062625  172785 logs.go:282] 0 containers: []
	W0210 11:49:42.062638  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:42.062652  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:42.062667  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:42.129149  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:42.129189  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:42.143765  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:42.143796  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:42.210618  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:42.210640  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:42.210652  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:42.288768  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:42.288814  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:44.827291  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:44.839517  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:44.839586  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:44.869997  172785 cri.go:89] found id: ""
	I0210 11:49:44.870023  172785 logs.go:282] 0 containers: []
	W0210 11:49:44.870031  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:44.870038  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:44.870098  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:44.900813  172785 cri.go:89] found id: ""
	I0210 11:49:44.900848  172785 logs.go:282] 0 containers: []
	W0210 11:49:44.900859  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:44.900868  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:44.900930  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:44.937728  172785 cri.go:89] found id: ""
	I0210 11:49:44.937759  172785 logs.go:282] 0 containers: []
	W0210 11:49:44.937767  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:44.937773  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:44.937821  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:44.966784  172785 cri.go:89] found id: ""
	I0210 11:49:44.966821  172785 logs.go:282] 0 containers: []
	W0210 11:49:44.966830  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:44.966836  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:44.966888  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:44.997080  172785 cri.go:89] found id: ""
	I0210 11:49:44.997114  172785 logs.go:282] 0 containers: []
	W0210 11:49:44.997127  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:44.997136  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:44.997198  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:45.030438  172785 cri.go:89] found id: ""
	I0210 11:49:45.030476  172785 logs.go:282] 0 containers: []
	W0210 11:49:45.030489  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:45.030498  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:45.030560  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:45.061185  172785 cri.go:89] found id: ""
	I0210 11:49:45.061215  172785 logs.go:282] 0 containers: []
	W0210 11:49:45.061224  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:45.061229  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:45.061279  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:45.091697  172785 cri.go:89] found id: ""
	I0210 11:49:45.091732  172785 logs.go:282] 0 containers: []
	W0210 11:49:45.091744  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:45.091757  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:45.091775  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:45.142610  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:45.142645  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:45.155965  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:45.156000  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:45.228917  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:45.228943  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:45.228958  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:45.312084  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:45.312125  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:47.855610  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:47.867981  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:47.868043  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:47.900416  172785 cri.go:89] found id: ""
	I0210 11:49:47.900449  172785 logs.go:282] 0 containers: []
	W0210 11:49:47.900457  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:47.900463  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:47.900520  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:47.931864  172785 cri.go:89] found id: ""
	I0210 11:49:47.931897  172785 logs.go:282] 0 containers: []
	W0210 11:49:47.931907  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:47.931914  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:47.931964  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:47.964686  172785 cri.go:89] found id: ""
	I0210 11:49:47.964721  172785 logs.go:282] 0 containers: []
	W0210 11:49:47.964733  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:47.964740  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:47.964802  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:47.997286  172785 cri.go:89] found id: ""
	I0210 11:49:47.997325  172785 logs.go:282] 0 containers: []
	W0210 11:49:47.997337  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:47.997343  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:47.997399  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:48.027208  172785 cri.go:89] found id: ""
	I0210 11:49:48.027240  172785 logs.go:282] 0 containers: []
	W0210 11:49:48.027249  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:48.027255  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:48.027312  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:48.061233  172785 cri.go:89] found id: ""
	I0210 11:49:48.061276  172785 logs.go:282] 0 containers: []
	W0210 11:49:48.061286  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:48.061293  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:48.061357  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:48.092805  172785 cri.go:89] found id: ""
	I0210 11:49:48.092838  172785 logs.go:282] 0 containers: []
	W0210 11:49:48.092848  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:48.092865  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:48.092927  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:48.125008  172785 cri.go:89] found id: ""
	I0210 11:49:48.125036  172785 logs.go:282] 0 containers: []
	W0210 11:49:48.125045  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:48.125057  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:48.125073  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:48.160740  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:48.160772  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:48.212437  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:48.212470  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:48.227102  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:48.227139  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:48.297374  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:48.297400  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:48.297415  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:50.879309  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:50.892368  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:50.892445  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:50.924622  172785 cri.go:89] found id: ""
	I0210 11:49:50.924659  172785 logs.go:282] 0 containers: []
	W0210 11:49:50.924669  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:50.924675  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:50.924727  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:50.957611  172785 cri.go:89] found id: ""
	I0210 11:49:50.957643  172785 logs.go:282] 0 containers: []
	W0210 11:49:50.957652  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:50.957657  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:50.957708  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:50.988923  172785 cri.go:89] found id: ""
	I0210 11:49:50.988960  172785 logs.go:282] 0 containers: []
	W0210 11:49:50.988972  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:50.988980  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:50.989042  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:51.024325  172785 cri.go:89] found id: ""
	I0210 11:49:51.024354  172785 logs.go:282] 0 containers: []
	W0210 11:49:51.024362  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:51.024369  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:51.024427  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:51.057140  172785 cri.go:89] found id: ""
	I0210 11:49:51.057179  172785 logs.go:282] 0 containers: []
	W0210 11:49:51.057187  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:51.057198  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:51.057250  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:51.089322  172785 cri.go:89] found id: ""
	I0210 11:49:51.089353  172785 logs.go:282] 0 containers: []
	W0210 11:49:51.089362  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:51.089368  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:51.089448  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:51.121581  172785 cri.go:89] found id: ""
	I0210 11:49:51.121617  172785 logs.go:282] 0 containers: []
	W0210 11:49:51.121629  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:51.121637  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:51.121695  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:51.155031  172785 cri.go:89] found id: ""
	I0210 11:49:51.155070  172785 logs.go:282] 0 containers: []
	W0210 11:49:51.155084  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:51.155098  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:51.155114  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:51.202891  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:51.202926  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:51.215735  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:51.215769  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:51.285982  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:51.286010  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:51.286023  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:51.360345  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:51.360385  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:53.896214  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:53.908428  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:53.908483  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:53.945578  172785 cri.go:89] found id: ""
	I0210 11:49:53.945622  172785 logs.go:282] 0 containers: []
	W0210 11:49:53.945636  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:53.945645  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:53.945729  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:53.977777  172785 cri.go:89] found id: ""
	I0210 11:49:53.977805  172785 logs.go:282] 0 containers: []
	W0210 11:49:53.977813  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:53.977819  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:53.977876  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:54.009995  172785 cri.go:89] found id: ""
	I0210 11:49:54.010024  172785 logs.go:282] 0 containers: []
	W0210 11:49:54.010032  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:54.010039  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:54.010104  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:54.046026  172785 cri.go:89] found id: ""
	I0210 11:49:54.046057  172785 logs.go:282] 0 containers: []
	W0210 11:49:54.046066  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:54.046072  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:54.046125  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:54.077970  172785 cri.go:89] found id: ""
	I0210 11:49:54.077999  172785 logs.go:282] 0 containers: []
	W0210 11:49:54.078007  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:54.078013  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:54.078065  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:54.108828  172785 cri.go:89] found id: ""
	I0210 11:49:54.108863  172785 logs.go:282] 0 containers: []
	W0210 11:49:54.108875  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:54.108884  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:54.108948  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:54.145354  172785 cri.go:89] found id: ""
	I0210 11:49:54.145382  172785 logs.go:282] 0 containers: []
	W0210 11:49:54.145390  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:54.145396  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:54.145463  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:54.179962  172785 cri.go:89] found id: ""
	I0210 11:49:54.179991  172785 logs.go:282] 0 containers: []
	W0210 11:49:54.179999  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:54.180008  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:54.180019  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:54.257599  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:54.257642  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:54.302802  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:54.302833  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:54.353288  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:54.353325  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:54.367490  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:54.367539  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:54.438306  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:56.938625  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:56.952051  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:56.952135  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:49:56.989642  172785 cri.go:89] found id: ""
	I0210 11:49:56.989680  172785 logs.go:282] 0 containers: []
	W0210 11:49:56.989693  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:49:56.989701  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:49:56.989768  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:49:57.026281  172785 cri.go:89] found id: ""
	I0210 11:49:57.026321  172785 logs.go:282] 0 containers: []
	W0210 11:49:57.026334  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:49:57.026344  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:49:57.026410  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:49:57.060422  172785 cri.go:89] found id: ""
	I0210 11:49:57.060456  172785 logs.go:282] 0 containers: []
	W0210 11:49:57.060468  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:49:57.060476  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:49:57.060538  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:49:57.092797  172785 cri.go:89] found id: ""
	I0210 11:49:57.092829  172785 logs.go:282] 0 containers: []
	W0210 11:49:57.092840  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:49:57.092848  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:49:57.092913  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:49:57.126413  172785 cri.go:89] found id: ""
	I0210 11:49:57.126446  172785 logs.go:282] 0 containers: []
	W0210 11:49:57.126457  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:49:57.126465  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:49:57.126531  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:49:57.159311  172785 cri.go:89] found id: ""
	I0210 11:49:57.159341  172785 logs.go:282] 0 containers: []
	W0210 11:49:57.159352  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:49:57.159360  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:49:57.159417  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:49:57.191097  172785 cri.go:89] found id: ""
	I0210 11:49:57.191142  172785 logs.go:282] 0 containers: []
	W0210 11:49:57.191153  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:49:57.191162  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:49:57.191239  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:49:57.223390  172785 cri.go:89] found id: ""
	I0210 11:49:57.223429  172785 logs.go:282] 0 containers: []
	W0210 11:49:57.223440  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:49:57.223455  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:49:57.223471  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:49:57.273838  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:49:57.273873  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:49:57.287888  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:49:57.287926  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:49:57.354059  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:49:57.354085  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:49:57.354109  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:49:57.439290  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:49:57.439332  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:49:59.979541  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:49:59.992842  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:49:59.992930  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:00.028949  172785 cri.go:89] found id: ""
	I0210 11:50:00.028983  172785 logs.go:282] 0 containers: []
	W0210 11:50:00.028994  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:00.029001  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:00.029065  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:00.066456  172785 cri.go:89] found id: ""
	I0210 11:50:00.066493  172785 logs.go:282] 0 containers: []
	W0210 11:50:00.066504  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:00.066512  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:00.066585  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:00.110513  172785 cri.go:89] found id: ""
	I0210 11:50:00.110553  172785 logs.go:282] 0 containers: []
	W0210 11:50:00.110565  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:00.110573  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:00.110641  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:00.149366  172785 cri.go:89] found id: ""
	I0210 11:50:00.149398  172785 logs.go:282] 0 containers: []
	W0210 11:50:00.149407  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:00.149413  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:00.149464  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:00.188260  172785 cri.go:89] found id: ""
	I0210 11:50:00.188289  172785 logs.go:282] 0 containers: []
	W0210 11:50:00.188297  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:00.188303  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:00.188351  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:00.223024  172785 cri.go:89] found id: ""
	I0210 11:50:00.223063  172785 logs.go:282] 0 containers: []
	W0210 11:50:00.223075  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:00.223083  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:00.223143  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:00.257954  172785 cri.go:89] found id: ""
	I0210 11:50:00.257986  172785 logs.go:282] 0 containers: []
	W0210 11:50:00.257998  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:00.258013  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:00.258073  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:00.290280  172785 cri.go:89] found id: ""
	I0210 11:50:00.290313  172785 logs.go:282] 0 containers: []
	W0210 11:50:00.290321  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:00.290331  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:00.290342  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:00.338906  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:00.338939  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:00.352163  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:00.352191  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:00.427024  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:00.427050  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:00.427069  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:00.511495  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:00.511527  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:03.051975  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:03.070052  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:03.070143  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:03.117208  172785 cri.go:89] found id: ""
	I0210 11:50:03.117239  172785 logs.go:282] 0 containers: []
	W0210 11:50:03.117250  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:03.117258  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:03.117317  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:03.156536  172785 cri.go:89] found id: ""
	I0210 11:50:03.156568  172785 logs.go:282] 0 containers: []
	W0210 11:50:03.156579  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:03.156587  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:03.156647  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:03.201152  172785 cri.go:89] found id: ""
	I0210 11:50:03.201181  172785 logs.go:282] 0 containers: []
	W0210 11:50:03.201190  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:03.201197  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:03.201254  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:03.246753  172785 cri.go:89] found id: ""
	I0210 11:50:03.246782  172785 logs.go:282] 0 containers: []
	W0210 11:50:03.246792  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:03.246800  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:03.246857  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:03.294909  172785 cri.go:89] found id: ""
	I0210 11:50:03.294938  172785 logs.go:282] 0 containers: []
	W0210 11:50:03.294949  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:03.294958  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:03.295013  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:03.341379  172785 cri.go:89] found id: ""
	I0210 11:50:03.341408  172785 logs.go:282] 0 containers: []
	W0210 11:50:03.341417  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:03.341425  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:03.341480  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:03.377836  172785 cri.go:89] found id: ""
	I0210 11:50:03.377868  172785 logs.go:282] 0 containers: []
	W0210 11:50:03.377880  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:03.377888  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:03.377946  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:03.418727  172785 cri.go:89] found id: ""
	I0210 11:50:03.418759  172785 logs.go:282] 0 containers: []
	W0210 11:50:03.418769  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:03.418783  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:03.418798  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:03.481142  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:03.481185  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:03.500362  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:03.500401  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:03.599869  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:03.599894  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:03.599909  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:03.683890  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:03.683931  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:06.230769  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:06.250418  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:06.250495  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:06.296253  172785 cri.go:89] found id: ""
	I0210 11:50:06.296288  172785 logs.go:282] 0 containers: []
	W0210 11:50:06.296301  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:06.296311  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:06.296377  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:06.341371  172785 cri.go:89] found id: ""
	I0210 11:50:06.341405  172785 logs.go:282] 0 containers: []
	W0210 11:50:06.341418  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:06.341427  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:06.341486  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:06.386584  172785 cri.go:89] found id: ""
	I0210 11:50:06.386623  172785 logs.go:282] 0 containers: []
	W0210 11:50:06.386634  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:06.386643  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:06.386715  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:06.431781  172785 cri.go:89] found id: ""
	I0210 11:50:06.431821  172785 logs.go:282] 0 containers: []
	W0210 11:50:06.431834  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:06.431844  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:06.431909  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:06.476324  172785 cri.go:89] found id: ""
	I0210 11:50:06.476362  172785 logs.go:282] 0 containers: []
	W0210 11:50:06.476375  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:06.476384  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:06.476454  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:06.525364  172785 cri.go:89] found id: ""
	I0210 11:50:06.525407  172785 logs.go:282] 0 containers: []
	W0210 11:50:06.525420  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:06.525432  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:06.525502  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:06.570869  172785 cri.go:89] found id: ""
	I0210 11:50:06.570902  172785 logs.go:282] 0 containers: []
	W0210 11:50:06.570912  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:06.570920  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:06.570980  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:06.609360  172785 cri.go:89] found id: ""
	I0210 11:50:06.609391  172785 logs.go:282] 0 containers: []
	W0210 11:50:06.609402  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:06.609413  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:06.609430  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:06.660471  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:06.660510  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:06.728977  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:06.729018  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:06.744175  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:06.744208  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:06.847705  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:06.847739  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:06.847758  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:09.433925  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:09.453275  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:09.453366  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:09.493033  172785 cri.go:89] found id: ""
	I0210 11:50:09.493088  172785 logs.go:282] 0 containers: []
	W0210 11:50:09.493101  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:09.493111  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:09.493180  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:09.538687  172785 cri.go:89] found id: ""
	I0210 11:50:09.538719  172785 logs.go:282] 0 containers: []
	W0210 11:50:09.538731  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:09.538739  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:09.538800  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:09.574077  172785 cri.go:89] found id: ""
	I0210 11:50:09.574114  172785 logs.go:282] 0 containers: []
	W0210 11:50:09.574135  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:09.574143  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:09.574221  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:09.620206  172785 cri.go:89] found id: ""
	I0210 11:50:09.620240  172785 logs.go:282] 0 containers: []
	W0210 11:50:09.620251  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:09.620260  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:09.620341  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:09.666449  172785 cri.go:89] found id: ""
	I0210 11:50:09.666487  172785 logs.go:282] 0 containers: []
	W0210 11:50:09.666500  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:09.666507  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:09.666571  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:09.710037  172785 cri.go:89] found id: ""
	I0210 11:50:09.710067  172785 logs.go:282] 0 containers: []
	W0210 11:50:09.710080  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:09.710092  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:09.710165  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:09.750797  172785 cri.go:89] found id: ""
	I0210 11:50:09.750831  172785 logs.go:282] 0 containers: []
	W0210 11:50:09.750842  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:09.750849  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:09.750910  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:09.790234  172785 cri.go:89] found id: ""
	I0210 11:50:09.790270  172785 logs.go:282] 0 containers: []
	W0210 11:50:09.790281  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:09.790293  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:09.790308  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:09.878507  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:09.878537  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:09.878554  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:09.962295  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:09.962339  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:10.005039  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:10.005075  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:10.069908  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:10.069964  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:12.587373  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:12.600861  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:12.600938  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:12.635169  172785 cri.go:89] found id: ""
	I0210 11:50:12.635213  172785 logs.go:282] 0 containers: []
	W0210 11:50:12.635225  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:12.635233  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:12.635298  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:12.667092  172785 cri.go:89] found id: ""
	I0210 11:50:12.667122  172785 logs.go:282] 0 containers: []
	W0210 11:50:12.667133  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:12.667142  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:12.667218  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:12.700358  172785 cri.go:89] found id: ""
	I0210 11:50:12.700396  172785 logs.go:282] 0 containers: []
	W0210 11:50:12.700404  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:12.700410  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:12.700462  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:12.733223  172785 cri.go:89] found id: ""
	I0210 11:50:12.733261  172785 logs.go:282] 0 containers: []
	W0210 11:50:12.733272  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:12.733280  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:12.733334  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:12.777019  172785 cri.go:89] found id: ""
	I0210 11:50:12.777050  172785 logs.go:282] 0 containers: []
	W0210 11:50:12.777060  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:12.777068  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:12.777136  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:12.814249  172785 cri.go:89] found id: ""
	I0210 11:50:12.814284  172785 logs.go:282] 0 containers: []
	W0210 11:50:12.814294  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:12.814301  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:12.814366  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:12.849706  172785 cri.go:89] found id: ""
	I0210 11:50:12.849738  172785 logs.go:282] 0 containers: []
	W0210 11:50:12.849746  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:12.849752  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:12.849813  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:12.885236  172785 cri.go:89] found id: ""
	I0210 11:50:12.885276  172785 logs.go:282] 0 containers: []
	W0210 11:50:12.885294  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:12.885307  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:12.885322  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:12.944232  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:12.944271  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:12.957120  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:12.957156  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:13.028946  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:13.028967  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:13.028981  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:13.106099  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:13.106145  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:15.645979  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:15.661753  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:15.661839  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:15.708709  172785 cri.go:89] found id: ""
	I0210 11:50:15.708743  172785 logs.go:282] 0 containers: []
	W0210 11:50:15.708755  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:15.708764  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:15.708827  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:15.747141  172785 cri.go:89] found id: ""
	I0210 11:50:15.747178  172785 logs.go:282] 0 containers: []
	W0210 11:50:15.747211  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:15.747222  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:15.747275  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:15.791457  172785 cri.go:89] found id: ""
	I0210 11:50:15.791496  172785 logs.go:282] 0 containers: []
	W0210 11:50:15.791507  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:15.791515  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:15.791595  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:15.833573  172785 cri.go:89] found id: ""
	I0210 11:50:15.833607  172785 logs.go:282] 0 containers: []
	W0210 11:50:15.833618  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:15.833627  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:15.833698  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:15.870759  172785 cri.go:89] found id: ""
	I0210 11:50:15.870795  172785 logs.go:282] 0 containers: []
	W0210 11:50:15.870806  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:15.870815  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:15.870883  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:15.904577  172785 cri.go:89] found id: ""
	I0210 11:50:15.904607  172785 logs.go:282] 0 containers: []
	W0210 11:50:15.904618  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:15.904626  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:15.904694  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:15.946533  172785 cri.go:89] found id: ""
	I0210 11:50:15.946568  172785 logs.go:282] 0 containers: []
	W0210 11:50:15.946580  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:15.946588  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:15.946663  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:15.985218  172785 cri.go:89] found id: ""
	I0210 11:50:15.985250  172785 logs.go:282] 0 containers: []
	W0210 11:50:15.985262  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:15.985275  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:15.985288  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:16.000608  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:16.000649  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:16.077578  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:16.077604  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:16.077621  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:16.155165  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:16.155215  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:16.201096  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:16.201134  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:18.760408  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:18.773594  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:18.773670  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:18.812866  172785 cri.go:89] found id: ""
	I0210 11:50:18.812903  172785 logs.go:282] 0 containers: []
	W0210 11:50:18.812915  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:18.812923  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:18.812990  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:18.845722  172785 cri.go:89] found id: ""
	I0210 11:50:18.845758  172785 logs.go:282] 0 containers: []
	W0210 11:50:18.845771  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:18.845779  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:18.845844  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:18.891970  172785 cri.go:89] found id: ""
	I0210 11:50:18.892002  172785 logs.go:282] 0 containers: []
	W0210 11:50:18.892014  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:18.892023  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:18.892082  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:18.959362  172785 cri.go:89] found id: ""
	I0210 11:50:18.959405  172785 logs.go:282] 0 containers: []
	W0210 11:50:18.959417  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:18.959425  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:18.959492  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:19.014917  172785 cri.go:89] found id: ""
	I0210 11:50:19.014951  172785 logs.go:282] 0 containers: []
	W0210 11:50:19.014963  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:19.014971  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:19.015042  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:19.056289  172785 cri.go:89] found id: ""
	I0210 11:50:19.056323  172785 logs.go:282] 0 containers: []
	W0210 11:50:19.056334  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:19.056342  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:19.056407  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:19.094886  172785 cri.go:89] found id: ""
	I0210 11:50:19.094918  172785 logs.go:282] 0 containers: []
	W0210 11:50:19.094929  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:19.094939  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:19.095007  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:19.129279  172785 cri.go:89] found id: ""
	I0210 11:50:19.129311  172785 logs.go:282] 0 containers: []
	W0210 11:50:19.129322  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:19.129334  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:19.129349  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:19.197547  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:19.197584  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:19.214616  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:19.214652  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:19.281666  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:19.281690  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:19.281705  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:19.362608  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:19.362642  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:21.904708  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:21.921156  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:21.921244  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:21.958208  172785 cri.go:89] found id: ""
	I0210 11:50:21.958250  172785 logs.go:282] 0 containers: []
	W0210 11:50:21.958273  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:21.958283  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:21.958353  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:22.001903  172785 cri.go:89] found id: ""
	I0210 11:50:22.001937  172785 logs.go:282] 0 containers: []
	W0210 11:50:22.001947  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:22.001955  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:22.002019  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:22.034008  172785 cri.go:89] found id: ""
	I0210 11:50:22.034045  172785 logs.go:282] 0 containers: []
	W0210 11:50:22.034059  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:22.034066  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:22.034153  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:22.065299  172785 cri.go:89] found id: ""
	I0210 11:50:22.065329  172785 logs.go:282] 0 containers: []
	W0210 11:50:22.065341  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:22.065349  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:22.065412  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:22.095996  172785 cri.go:89] found id: ""
	I0210 11:50:22.096027  172785 logs.go:282] 0 containers: []
	W0210 11:50:22.096038  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:22.096047  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:22.096118  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:22.131919  172785 cri.go:89] found id: ""
	I0210 11:50:22.131955  172785 logs.go:282] 0 containers: []
	W0210 11:50:22.131967  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:22.131974  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:22.132039  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:22.167228  172785 cri.go:89] found id: ""
	I0210 11:50:22.167265  172785 logs.go:282] 0 containers: []
	W0210 11:50:22.167276  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:22.167284  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:22.167351  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:22.208077  172785 cri.go:89] found id: ""
	I0210 11:50:22.208123  172785 logs.go:282] 0 containers: []
	W0210 11:50:22.208136  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:22.208149  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:22.208164  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:22.246146  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:22.246175  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:22.298865  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:22.298905  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:22.315208  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:22.315252  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:22.386116  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:22.386143  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:22.386160  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:24.963308  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:24.976583  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:24.976653  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:25.015816  172785 cri.go:89] found id: ""
	I0210 11:50:25.015849  172785 logs.go:282] 0 containers: []
	W0210 11:50:25.015858  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:25.015864  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:25.015921  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:25.049272  172785 cri.go:89] found id: ""
	I0210 11:50:25.049301  172785 logs.go:282] 0 containers: []
	W0210 11:50:25.049314  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:25.049323  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:25.049380  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:25.087389  172785 cri.go:89] found id: ""
	I0210 11:50:25.087473  172785 logs.go:282] 0 containers: []
	W0210 11:50:25.087489  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:25.087497  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:25.087576  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:25.128399  172785 cri.go:89] found id: ""
	I0210 11:50:25.128424  172785 logs.go:282] 0 containers: []
	W0210 11:50:25.128433  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:25.128439  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:25.128508  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:25.166062  172785 cri.go:89] found id: ""
	I0210 11:50:25.166095  172785 logs.go:282] 0 containers: []
	W0210 11:50:25.166114  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:25.166122  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:25.166192  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:25.207333  172785 cri.go:89] found id: ""
	I0210 11:50:25.207366  172785 logs.go:282] 0 containers: []
	W0210 11:50:25.207374  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:25.207380  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:25.207434  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:25.240406  172785 cri.go:89] found id: ""
	I0210 11:50:25.240441  172785 logs.go:282] 0 containers: []
	W0210 11:50:25.240465  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:25.240484  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:25.240561  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:25.276142  172785 cri.go:89] found id: ""
	I0210 11:50:25.276178  172785 logs.go:282] 0 containers: []
	W0210 11:50:25.276191  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:25.276203  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:25.276217  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:25.320258  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:25.320302  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:25.385194  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:25.385242  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:25.398679  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:25.398712  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:25.465652  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:25.465678  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:25.465697  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:28.055118  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:28.069921  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:28.069996  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:28.102239  172785 cri.go:89] found id: ""
	I0210 11:50:28.102274  172785 logs.go:282] 0 containers: []
	W0210 11:50:28.102285  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:28.102293  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:28.102357  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:28.138464  172785 cri.go:89] found id: ""
	I0210 11:50:28.138499  172785 logs.go:282] 0 containers: []
	W0210 11:50:28.138511  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:28.138520  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:28.138586  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:28.172464  172785 cri.go:89] found id: ""
	I0210 11:50:28.172502  172785 logs.go:282] 0 containers: []
	W0210 11:50:28.172513  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:28.172522  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:28.172589  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:28.203914  172785 cri.go:89] found id: ""
	I0210 11:50:28.203944  172785 logs.go:282] 0 containers: []
	W0210 11:50:28.203955  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:28.203964  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:28.204039  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:28.237587  172785 cri.go:89] found id: ""
	I0210 11:50:28.237613  172785 logs.go:282] 0 containers: []
	W0210 11:50:28.237621  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:28.237626  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:28.237680  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:28.271035  172785 cri.go:89] found id: ""
	I0210 11:50:28.271067  172785 logs.go:282] 0 containers: []
	W0210 11:50:28.271075  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:28.271081  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:28.271140  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:28.303394  172785 cri.go:89] found id: ""
	I0210 11:50:28.303427  172785 logs.go:282] 0 containers: []
	W0210 11:50:28.303436  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:28.303443  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:28.303507  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:28.344557  172785 cri.go:89] found id: ""
	I0210 11:50:28.344584  172785 logs.go:282] 0 containers: []
	W0210 11:50:28.344593  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:28.344602  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:28.344613  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:28.383336  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:28.383371  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:28.434703  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:28.434743  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:28.448445  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:28.448474  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:28.519376  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:28.519405  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:28.519424  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:31.095303  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:31.112506  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:31.112571  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:31.157750  172785 cri.go:89] found id: ""
	I0210 11:50:31.157782  172785 logs.go:282] 0 containers: []
	W0210 11:50:31.157794  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:31.157803  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:31.157862  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:31.198870  172785 cri.go:89] found id: ""
	I0210 11:50:31.198896  172785 logs.go:282] 0 containers: []
	W0210 11:50:31.198907  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:31.198915  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:31.198971  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:31.236317  172785 cri.go:89] found id: ""
	I0210 11:50:31.236350  172785 logs.go:282] 0 containers: []
	W0210 11:50:31.236359  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:31.236368  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:31.236433  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:31.274293  172785 cri.go:89] found id: ""
	I0210 11:50:31.274341  172785 logs.go:282] 0 containers: []
	W0210 11:50:31.274361  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:31.274370  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:31.274434  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:31.322933  172785 cri.go:89] found id: ""
	I0210 11:50:31.322960  172785 logs.go:282] 0 containers: []
	W0210 11:50:31.322971  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:31.322980  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:31.323044  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:31.362332  172785 cri.go:89] found id: ""
	I0210 11:50:31.362371  172785 logs.go:282] 0 containers: []
	W0210 11:50:31.362382  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:31.362393  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:31.362464  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:31.397842  172785 cri.go:89] found id: ""
	I0210 11:50:31.397879  172785 logs.go:282] 0 containers: []
	W0210 11:50:31.397891  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:31.397899  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:31.397961  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:31.436934  172785 cri.go:89] found id: ""
	I0210 11:50:31.436968  172785 logs.go:282] 0 containers: []
	W0210 11:50:31.436978  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:31.436990  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:31.437005  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:31.499877  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:31.499913  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:31.514472  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:31.514504  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:31.585858  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:31.585896  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:31.585912  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:31.684423  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:31.684468  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:34.235359  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:34.271280  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:34.271358  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:34.328197  172785 cri.go:89] found id: ""
	I0210 11:50:34.328233  172785 logs.go:282] 0 containers: []
	W0210 11:50:34.328246  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:34.328255  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:34.328328  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:34.377019  172785 cri.go:89] found id: ""
	I0210 11:50:34.377053  172785 logs.go:282] 0 containers: []
	W0210 11:50:34.377065  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:34.377074  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:34.377144  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:34.426702  172785 cri.go:89] found id: ""
	I0210 11:50:34.426733  172785 logs.go:282] 0 containers: []
	W0210 11:50:34.426744  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:34.426753  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:34.426822  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:34.487138  172785 cri.go:89] found id: ""
	I0210 11:50:34.487172  172785 logs.go:282] 0 containers: []
	W0210 11:50:34.487197  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:34.487207  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:34.487275  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:34.554044  172785 cri.go:89] found id: ""
	I0210 11:50:34.554090  172785 logs.go:282] 0 containers: []
	W0210 11:50:34.554102  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:34.554114  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:34.554188  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:34.611070  172785 cri.go:89] found id: ""
	I0210 11:50:34.611101  172785 logs.go:282] 0 containers: []
	W0210 11:50:34.611112  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:34.611120  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:34.611205  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:34.663941  172785 cri.go:89] found id: ""
	I0210 11:50:34.663979  172785 logs.go:282] 0 containers: []
	W0210 11:50:34.663990  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:34.664006  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:34.664062  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:34.713666  172785 cri.go:89] found id: ""
	I0210 11:50:34.713702  172785 logs.go:282] 0 containers: []
	W0210 11:50:34.713713  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:34.713726  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:34.713742  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:34.801707  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:34.801754  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:34.862530  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:34.862584  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:34.941736  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:34.941795  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:34.961486  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:34.961527  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:35.056545  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:37.558183  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:37.573813  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:37.573884  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:37.613151  172785 cri.go:89] found id: ""
	I0210 11:50:37.613183  172785 logs.go:282] 0 containers: []
	W0210 11:50:37.613193  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:37.613201  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:37.613267  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:37.654269  172785 cri.go:89] found id: ""
	I0210 11:50:37.654298  172785 logs.go:282] 0 containers: []
	W0210 11:50:37.654310  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:37.654318  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:37.654376  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:37.698079  172785 cri.go:89] found id: ""
	I0210 11:50:37.698114  172785 logs.go:282] 0 containers: []
	W0210 11:50:37.698133  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:37.698150  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:37.698217  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:37.736172  172785 cri.go:89] found id: ""
	I0210 11:50:37.736210  172785 logs.go:282] 0 containers: []
	W0210 11:50:37.736220  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:37.736228  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:37.736291  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:37.774816  172785 cri.go:89] found id: ""
	I0210 11:50:37.774858  172785 logs.go:282] 0 containers: []
	W0210 11:50:37.774867  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:37.774874  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:37.774934  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:37.814253  172785 cri.go:89] found id: ""
	I0210 11:50:37.814286  172785 logs.go:282] 0 containers: []
	W0210 11:50:37.814296  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:37.814307  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:37.814367  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:37.849152  172785 cri.go:89] found id: ""
	I0210 11:50:37.849192  172785 logs.go:282] 0 containers: []
	W0210 11:50:37.849204  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:37.849212  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:37.849280  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:37.885104  172785 cri.go:89] found id: ""
	I0210 11:50:37.885137  172785 logs.go:282] 0 containers: []
	W0210 11:50:37.885149  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:37.885163  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:37.885181  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:37.943979  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:37.944014  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:37.960620  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:37.960669  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:38.050472  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:38.050502  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:38.050519  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:38.142170  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:38.142199  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:40.685500  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:40.698927  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:40.699006  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:40.740051  172785 cri.go:89] found id: ""
	I0210 11:50:40.740089  172785 logs.go:282] 0 containers: []
	W0210 11:50:40.740111  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:40.740119  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:40.740188  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:40.779725  172785 cri.go:89] found id: ""
	I0210 11:50:40.779759  172785 logs.go:282] 0 containers: []
	W0210 11:50:40.779768  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:40.779782  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:40.779864  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:40.815864  172785 cri.go:89] found id: ""
	I0210 11:50:40.815896  172785 logs.go:282] 0 containers: []
	W0210 11:50:40.815908  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:40.815916  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:40.815975  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:40.851477  172785 cri.go:89] found id: ""
	I0210 11:50:40.851513  172785 logs.go:282] 0 containers: []
	W0210 11:50:40.851526  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:40.851533  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:40.851597  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:40.895094  172785 cri.go:89] found id: ""
	I0210 11:50:40.895123  172785 logs.go:282] 0 containers: []
	W0210 11:50:40.895143  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:40.895152  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:40.895234  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:40.939007  172785 cri.go:89] found id: ""
	I0210 11:50:40.939038  172785 logs.go:282] 0 containers: []
	W0210 11:50:40.939050  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:40.939059  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:40.939127  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:40.984503  172785 cri.go:89] found id: ""
	I0210 11:50:40.984532  172785 logs.go:282] 0 containers: []
	W0210 11:50:40.984543  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:40.984550  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:40.984620  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:41.024792  172785 cri.go:89] found id: ""
	I0210 11:50:41.024823  172785 logs.go:282] 0 containers: []
	W0210 11:50:41.024841  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:41.024855  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:41.024871  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:41.116191  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:41.116216  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:41.116233  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:41.225433  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:41.225480  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:41.283518  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:41.283544  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:41.385325  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:41.385372  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:43.901152  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:43.918363  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:43.918445  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:43.960661  172785 cri.go:89] found id: ""
	I0210 11:50:43.960698  172785 logs.go:282] 0 containers: []
	W0210 11:50:43.960709  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:43.960718  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:43.960789  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:44.007381  172785 cri.go:89] found id: ""
	I0210 11:50:44.007418  172785 logs.go:282] 0 containers: []
	W0210 11:50:44.007431  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:44.007440  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:44.007507  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:44.046491  172785 cri.go:89] found id: ""
	I0210 11:50:44.046528  172785 logs.go:282] 0 containers: []
	W0210 11:50:44.046541  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:44.046549  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:44.046614  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:44.092793  172785 cri.go:89] found id: ""
	I0210 11:50:44.092827  172785 logs.go:282] 0 containers: []
	W0210 11:50:44.092838  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:44.092846  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:44.092906  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:44.135735  172785 cri.go:89] found id: ""
	I0210 11:50:44.135767  172785 logs.go:282] 0 containers: []
	W0210 11:50:44.135778  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:44.135787  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:44.135849  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:44.180755  172785 cri.go:89] found id: ""
	I0210 11:50:44.180790  172785 logs.go:282] 0 containers: []
	W0210 11:50:44.180801  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:44.180808  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:44.180883  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:44.226638  172785 cri.go:89] found id: ""
	I0210 11:50:44.226687  172785 logs.go:282] 0 containers: []
	W0210 11:50:44.226699  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:44.226706  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:44.226771  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:44.275323  172785 cri.go:89] found id: ""
	I0210 11:50:44.275359  172785 logs.go:282] 0 containers: []
	W0210 11:50:44.275384  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:44.275397  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:44.275412  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:44.358313  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:44.358365  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:44.376497  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:44.376546  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:44.460566  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:44.460596  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:44.460610  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:44.561007  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:44.561056  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:47.117049  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:47.130331  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:47.130411  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:47.168592  172785 cri.go:89] found id: ""
	I0210 11:50:47.168622  172785 logs.go:282] 0 containers: []
	W0210 11:50:47.168634  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:47.168643  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:47.168713  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:47.212706  172785 cri.go:89] found id: ""
	I0210 11:50:47.212741  172785 logs.go:282] 0 containers: []
	W0210 11:50:47.212752  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:47.212760  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:47.212827  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:47.251900  172785 cri.go:89] found id: ""
	I0210 11:50:47.251935  172785 logs.go:282] 0 containers: []
	W0210 11:50:47.251948  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:47.251956  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:47.252011  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:47.298294  172785 cri.go:89] found id: ""
	I0210 11:50:47.298328  172785 logs.go:282] 0 containers: []
	W0210 11:50:47.298344  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:47.298352  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:47.298412  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:47.336002  172785 cri.go:89] found id: ""
	I0210 11:50:47.336063  172785 logs.go:282] 0 containers: []
	W0210 11:50:47.336079  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:47.336121  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:47.336194  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:47.368357  172785 cri.go:89] found id: ""
	I0210 11:50:47.368387  172785 logs.go:282] 0 containers: []
	W0210 11:50:47.368395  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:47.368401  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:47.368471  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:47.410229  172785 cri.go:89] found id: ""
	I0210 11:50:47.410263  172785 logs.go:282] 0 containers: []
	W0210 11:50:47.410274  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:47.410282  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:47.410346  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:47.451966  172785 cri.go:89] found id: ""
	I0210 11:50:47.451996  172785 logs.go:282] 0 containers: []
	W0210 11:50:47.452007  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:47.452019  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:47.452033  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:47.497915  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:47.497956  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:47.562058  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:47.562113  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:47.580247  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:47.580289  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:47.657878  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:47.657899  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:47.657914  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:50.235517  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:50.253210  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:50.253279  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:50.291334  172785 cri.go:89] found id: ""
	I0210 11:50:50.291368  172785 logs.go:282] 0 containers: []
	W0210 11:50:50.291379  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:50.291386  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:50.291449  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:50.324275  172785 cri.go:89] found id: ""
	I0210 11:50:50.324320  172785 logs.go:282] 0 containers: []
	W0210 11:50:50.324334  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:50.324342  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:50.324408  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:50.357234  172785 cri.go:89] found id: ""
	I0210 11:50:50.357265  172785 logs.go:282] 0 containers: []
	W0210 11:50:50.357276  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:50.357283  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:50.357358  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:50.389376  172785 cri.go:89] found id: ""
	I0210 11:50:50.389412  172785 logs.go:282] 0 containers: []
	W0210 11:50:50.389423  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:50.389431  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:50.389497  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:50.427489  172785 cri.go:89] found id: ""
	I0210 11:50:50.427518  172785 logs.go:282] 0 containers: []
	W0210 11:50:50.427526  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:50.427532  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:50.427583  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:50.460169  172785 cri.go:89] found id: ""
	I0210 11:50:50.460196  172785 logs.go:282] 0 containers: []
	W0210 11:50:50.460207  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:50.460217  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:50.460274  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:50.499428  172785 cri.go:89] found id: ""
	I0210 11:50:50.499465  172785 logs.go:282] 0 containers: []
	W0210 11:50:50.499477  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:50.499485  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:50.499551  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:50.534896  172785 cri.go:89] found id: ""
	I0210 11:50:50.534929  172785 logs.go:282] 0 containers: []
	W0210 11:50:50.534941  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:50.534954  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:50.534968  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:50.594624  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:50.594659  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:50.607713  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:50.607745  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:50.673750  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:50.673773  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:50.673789  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:50.752516  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:50.752555  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:53.289669  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:53.307348  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:53.307425  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:53.349659  172785 cri.go:89] found id: ""
	I0210 11:50:53.349692  172785 logs.go:282] 0 containers: []
	W0210 11:50:53.349712  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:53.349720  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:53.349791  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:53.383750  172785 cri.go:89] found id: ""
	I0210 11:50:53.383783  172785 logs.go:282] 0 containers: []
	W0210 11:50:53.383794  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:53.383803  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:53.383869  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:53.418487  172785 cri.go:89] found id: ""
	I0210 11:50:53.418519  172785 logs.go:282] 0 containers: []
	W0210 11:50:53.418531  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:53.418538  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:53.418646  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:53.454401  172785 cri.go:89] found id: ""
	I0210 11:50:53.454431  172785 logs.go:282] 0 containers: []
	W0210 11:50:53.454443  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:53.454450  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:53.454517  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:53.487888  172785 cri.go:89] found id: ""
	I0210 11:50:53.487916  172785 logs.go:282] 0 containers: []
	W0210 11:50:53.487926  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:53.487933  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:53.487998  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:53.521410  172785 cri.go:89] found id: ""
	I0210 11:50:53.521444  172785 logs.go:282] 0 containers: []
	W0210 11:50:53.521456  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:53.521465  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:53.521527  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:53.567528  172785 cri.go:89] found id: ""
	I0210 11:50:53.567562  172785 logs.go:282] 0 containers: []
	W0210 11:50:53.567571  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:53.567577  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:53.567639  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:53.606382  172785 cri.go:89] found id: ""
	I0210 11:50:53.606421  172785 logs.go:282] 0 containers: []
	W0210 11:50:53.606433  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:53.606445  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:53.606460  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:53.682992  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:53.683026  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:53.683041  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:53.782172  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:53.782219  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:53.837588  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:53.837627  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:53.896706  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:53.896750  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:56.411078  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:56.425837  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:56.425915  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:56.460620  172785 cri.go:89] found id: ""
	I0210 11:50:56.460644  172785 logs.go:282] 0 containers: []
	W0210 11:50:56.460651  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:56.460657  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:56.460704  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:56.493320  172785 cri.go:89] found id: ""
	I0210 11:50:56.493345  172785 logs.go:282] 0 containers: []
	W0210 11:50:56.493353  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:56.493359  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:56.493415  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:56.529836  172785 cri.go:89] found id: ""
	I0210 11:50:56.529866  172785 logs.go:282] 0 containers: []
	W0210 11:50:56.529877  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:56.529886  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:56.529948  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:56.563516  172785 cri.go:89] found id: ""
	I0210 11:50:56.563550  172785 logs.go:282] 0 containers: []
	W0210 11:50:56.563562  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:56.563571  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:56.563631  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:56.597883  172785 cri.go:89] found id: ""
	I0210 11:50:56.597909  172785 logs.go:282] 0 containers: []
	W0210 11:50:56.597918  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:56.597925  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:56.597981  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:56.631578  172785 cri.go:89] found id: ""
	I0210 11:50:56.631600  172785 logs.go:282] 0 containers: []
	W0210 11:50:56.631608  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:56.631614  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:56.631654  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:56.669129  172785 cri.go:89] found id: ""
	I0210 11:50:56.669155  172785 logs.go:282] 0 containers: []
	W0210 11:50:56.669164  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:56.669172  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:56.669223  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:56.705056  172785 cri.go:89] found id: ""
	I0210 11:50:56.705085  172785 logs.go:282] 0 containers: []
	W0210 11:50:56.705097  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:56.705109  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:56.705132  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:56.755597  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:56.755621  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:56.771223  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:56.771248  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:56.834505  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:56.834533  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:56.834550  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:50:56.933608  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:56.933644  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:59.476662  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:50:59.496511  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:50:59.496573  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:50:59.546740  172785 cri.go:89] found id: ""
	I0210 11:50:59.546776  172785 logs.go:282] 0 containers: []
	W0210 11:50:59.546787  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:50:59.546795  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:50:59.546863  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:50:59.601451  172785 cri.go:89] found id: ""
	I0210 11:50:59.601485  172785 logs.go:282] 0 containers: []
	W0210 11:50:59.601496  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:50:59.601504  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:50:59.601570  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:50:59.639424  172785 cri.go:89] found id: ""
	I0210 11:50:59.639457  172785 logs.go:282] 0 containers: []
	W0210 11:50:59.639470  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:50:59.639477  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:50:59.639544  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:50:59.669142  172785 cri.go:89] found id: ""
	I0210 11:50:59.669176  172785 logs.go:282] 0 containers: []
	W0210 11:50:59.669187  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:50:59.669195  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:50:59.669257  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:50:59.699090  172785 cri.go:89] found id: ""
	I0210 11:50:59.699126  172785 logs.go:282] 0 containers: []
	W0210 11:50:59.699139  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:50:59.699147  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:50:59.699223  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:50:59.729449  172785 cri.go:89] found id: ""
	I0210 11:50:59.729475  172785 logs.go:282] 0 containers: []
	W0210 11:50:59.729486  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:50:59.729493  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:50:59.729560  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:50:59.766762  172785 cri.go:89] found id: ""
	I0210 11:50:59.766791  172785 logs.go:282] 0 containers: []
	W0210 11:50:59.766804  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:50:59.766815  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:50:59.766872  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:50:59.803784  172785 cri.go:89] found id: ""
	I0210 11:50:59.803814  172785 logs.go:282] 0 containers: []
	W0210 11:50:59.803822  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:50:59.803831  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:50:59.803845  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:50:59.845064  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:50:59.845098  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:50:59.894127  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:50:59.894158  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:50:59.908603  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:50:59.908628  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:50:59.981631  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:50:59.981656  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:50:59.981674  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:02.559662  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:02.571874  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:02.571945  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:02.602194  172785 cri.go:89] found id: ""
	I0210 11:51:02.602223  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.602233  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:02.602241  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:02.602304  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:02.633510  172785 cri.go:89] found id: ""
	I0210 11:51:02.633537  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.633547  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:02.633557  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:02.633622  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:02.669730  172785 cri.go:89] found id: ""
	I0210 11:51:02.669764  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.669776  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:02.669784  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:02.669849  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:02.699759  172785 cri.go:89] found id: ""
	I0210 11:51:02.699826  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.699843  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:02.699853  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:02.699915  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:02.735317  172785 cri.go:89] found id: ""
	I0210 11:51:02.735346  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.735354  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:02.735360  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:02.735410  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:02.765670  172785 cri.go:89] found id: ""
	I0210 11:51:02.765697  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.765704  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:02.765710  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:02.765759  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:02.797404  172785 cri.go:89] found id: ""
	I0210 11:51:02.797435  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.797448  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:02.797456  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:02.797515  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:02.829414  172785 cri.go:89] found id: ""
	I0210 11:51:02.829448  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.829459  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:02.829471  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:02.829487  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:02.880066  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:02.880105  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:02.893239  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:02.893274  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:02.971736  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:02.971766  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:02.971782  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:03.046928  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:03.046967  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:05.590932  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:05.604033  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:05.604091  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:05.640343  172785 cri.go:89] found id: ""
	I0210 11:51:05.640374  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.640383  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:05.640391  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:05.640441  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:05.676294  172785 cri.go:89] found id: ""
	I0210 11:51:05.676319  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.676326  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:05.676331  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:05.676371  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:05.708986  172785 cri.go:89] found id: ""
	I0210 11:51:05.709016  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.709026  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:05.709034  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:05.709087  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:05.741689  172785 cri.go:89] found id: ""
	I0210 11:51:05.741714  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.741722  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:05.741728  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:05.741769  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:05.774470  172785 cri.go:89] found id: ""
	I0210 11:51:05.774496  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.774506  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:05.774514  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:05.774571  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:05.806632  172785 cri.go:89] found id: ""
	I0210 11:51:05.806659  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.806669  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:05.806676  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:05.806725  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:05.849963  172785 cri.go:89] found id: ""
	I0210 11:51:05.849987  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.850001  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:05.850012  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:05.850068  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:05.888840  172785 cri.go:89] found id: ""
	I0210 11:51:05.888870  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.888880  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:05.888893  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:05.888907  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:05.930082  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:05.930105  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:05.985122  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:05.985156  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:06.000022  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:06.000051  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:06.080268  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:06.080290  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:06.080305  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:08.668417  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:08.681333  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:08.681391  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:08.716394  172785 cri.go:89] found id: ""
	I0210 11:51:08.716427  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.716435  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:08.716442  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:08.716492  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:08.752135  172785 cri.go:89] found id: ""
	I0210 11:51:08.752161  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.752170  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:08.752175  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:08.752222  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:08.785404  172785 cri.go:89] found id: ""
	I0210 11:51:08.785430  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.785438  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:08.785443  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:08.785506  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:08.816938  172785 cri.go:89] found id: ""
	I0210 11:51:08.816965  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.816977  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:08.816986  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:08.817078  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:08.850791  172785 cri.go:89] found id: ""
	I0210 11:51:08.850827  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.850838  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:08.850847  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:08.850905  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:08.887566  172785 cri.go:89] found id: ""
	I0210 11:51:08.887602  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.887615  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:08.887623  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:08.887686  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:08.921347  172785 cri.go:89] found id: ""
	I0210 11:51:08.921389  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.921397  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:08.921404  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:08.921462  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:08.954704  172785 cri.go:89] found id: ""
	I0210 11:51:08.954738  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.954750  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:08.954762  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:08.954777  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:09.004897  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:09.004932  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:09.020413  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:09.020440  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:09.093835  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:09.093861  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:09.093874  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:09.174312  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:09.174355  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:11.710924  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:11.722908  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:11.722976  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:11.756702  172785 cri.go:89] found id: ""
	I0210 11:51:11.756744  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.756757  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:11.756765  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:11.756839  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:11.787281  172785 cri.go:89] found id: ""
	I0210 11:51:11.787315  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.787326  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:11.787334  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:11.787407  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:11.817416  172785 cri.go:89] found id: ""
	I0210 11:51:11.817443  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.817451  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:11.817456  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:11.817508  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:11.847209  172785 cri.go:89] found id: ""
	I0210 11:51:11.847241  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.847253  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:11.847260  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:11.847326  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:11.883365  172785 cri.go:89] found id: ""
	I0210 11:51:11.883395  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.883403  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:11.883408  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:11.883457  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:11.919812  172785 cri.go:89] found id: ""
	I0210 11:51:11.919840  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.919847  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:11.919854  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:11.919901  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:11.961310  172785 cri.go:89] found id: ""
	I0210 11:51:11.961348  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.961359  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:11.961366  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:11.961443  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:11.999667  172785 cri.go:89] found id: ""
	I0210 11:51:11.999701  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.999709  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:11.999718  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:11.999730  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:12.049284  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:12.049320  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:12.062044  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:12.062073  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:12.126307  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:12.126334  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:12.126351  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:12.215334  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:12.215382  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:14.752711  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:14.765091  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:14.765158  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:14.796318  172785 cri.go:89] found id: ""
	I0210 11:51:14.796352  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.796362  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:14.796371  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:14.796438  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:14.826452  172785 cri.go:89] found id: ""
	I0210 11:51:14.826484  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.826493  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:14.826501  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:14.826566  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:14.859861  172785 cri.go:89] found id: ""
	I0210 11:51:14.859890  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.859898  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:14.859904  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:14.859965  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:14.893708  172785 cri.go:89] found id: ""
	I0210 11:51:14.893740  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.893748  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:14.893755  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:14.893820  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:14.925870  172785 cri.go:89] found id: ""
	I0210 11:51:14.925897  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.925905  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:14.925911  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:14.925977  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:14.960528  172785 cri.go:89] found id: ""
	I0210 11:51:14.960554  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.960562  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:14.960567  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:14.960630  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:14.992831  172785 cri.go:89] found id: ""
	I0210 11:51:14.992859  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.992867  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:14.992874  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:14.992934  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:15.026146  172785 cri.go:89] found id: ""
	I0210 11:51:15.026182  172785 logs.go:282] 0 containers: []
	W0210 11:51:15.026193  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:15.026203  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:15.026217  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:15.074502  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:15.074537  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:15.087671  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:15.087713  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:15.152959  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:15.152984  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:15.153000  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:15.225042  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:15.225082  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:17.763634  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:17.776970  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:17.777038  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:17.810704  172785 cri.go:89] found id: ""
	I0210 11:51:17.810736  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.810747  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:17.810755  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:17.810814  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:17.845216  172785 cri.go:89] found id: ""
	I0210 11:51:17.845242  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.845251  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:17.845257  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:17.845316  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:17.877621  172785 cri.go:89] found id: ""
	I0210 11:51:17.877652  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.877668  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:17.877675  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:17.877737  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:17.908704  172785 cri.go:89] found id: ""
	I0210 11:51:17.908730  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.908739  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:17.908744  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:17.908792  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:17.943857  172785 cri.go:89] found id: ""
	I0210 11:51:17.943887  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.943896  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:17.943902  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:17.943952  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:17.974965  172785 cri.go:89] found id: ""
	I0210 11:51:17.974998  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.975010  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:17.975018  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:17.975085  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:18.006248  172785 cri.go:89] found id: ""
	I0210 11:51:18.006282  172785 logs.go:282] 0 containers: []
	W0210 11:51:18.006292  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:18.006300  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:18.006360  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:18.036899  172785 cri.go:89] found id: ""
	I0210 11:51:18.036943  172785 logs.go:282] 0 containers: []
	W0210 11:51:18.036954  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:18.036967  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:18.036982  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:18.049026  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:18.049054  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:18.111425  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:18.111452  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:18.111464  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:18.185158  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:18.185198  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:18.220425  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:18.220458  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:20.771952  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:20.784242  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:20.784303  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:20.815676  172785 cri.go:89] found id: ""
	I0210 11:51:20.815702  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.815709  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:20.815715  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:20.815773  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:20.845540  172785 cri.go:89] found id: ""
	I0210 11:51:20.845573  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.845583  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:20.845592  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:20.845654  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:20.875046  172785 cri.go:89] found id: ""
	I0210 11:51:20.875077  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.875086  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:20.875092  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:20.875150  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:20.905636  172785 cri.go:89] found id: ""
	I0210 11:51:20.905662  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.905670  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:20.905675  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:20.905722  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:20.935907  172785 cri.go:89] found id: ""
	I0210 11:51:20.935938  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.935948  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:20.935955  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:20.936028  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:20.965345  172785 cri.go:89] found id: ""
	I0210 11:51:20.965375  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.965386  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:20.965395  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:20.965464  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:20.995608  172785 cri.go:89] found id: ""
	I0210 11:51:20.995637  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.995646  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:20.995651  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:20.995712  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:21.025886  172785 cri.go:89] found id: ""
	I0210 11:51:21.025914  172785 logs.go:282] 0 containers: []
	W0210 11:51:21.025923  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:21.025932  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:21.025946  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:21.074578  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:21.074617  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:21.087795  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:21.087825  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:21.151479  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:21.151505  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:21.151520  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:21.228563  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:21.228613  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:23.769730  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:23.781806  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:23.781877  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:23.812884  172785 cri.go:89] found id: ""
	I0210 11:51:23.812912  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.812920  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:23.812926  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:23.812975  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:23.844665  172785 cri.go:89] found id: ""
	I0210 11:51:23.844700  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.844708  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:23.844713  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:23.844764  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:23.879613  172785 cri.go:89] found id: ""
	I0210 11:51:23.879642  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.879651  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:23.879657  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:23.879711  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:23.911425  172785 cri.go:89] found id: ""
	I0210 11:51:23.911452  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.911459  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:23.911465  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:23.911515  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:23.944567  172785 cri.go:89] found id: ""
	I0210 11:51:23.944601  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.944610  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:23.944617  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:23.944669  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:23.974980  172785 cri.go:89] found id: ""
	I0210 11:51:23.975008  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.975016  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:23.975022  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:23.975074  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:24.006450  172785 cri.go:89] found id: ""
	I0210 11:51:24.006484  172785 logs.go:282] 0 containers: []
	W0210 11:51:24.006492  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:24.006499  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:24.006563  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:24.037483  172785 cri.go:89] found id: ""
	I0210 11:51:24.037521  172785 logs.go:282] 0 containers: []
	W0210 11:51:24.037533  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:24.037545  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:24.037560  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:24.049887  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:24.049921  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:24.117589  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:24.117615  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:24.117628  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:24.193737  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:24.193775  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:24.230256  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:24.230287  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:26.780045  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:26.792355  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:26.792446  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:26.826505  172785 cri.go:89] found id: ""
	I0210 11:51:26.826536  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.826544  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:26.826550  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:26.826601  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:26.865128  172785 cri.go:89] found id: ""
	I0210 11:51:26.865172  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.865185  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:26.865193  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:26.865259  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:26.897605  172785 cri.go:89] found id: ""
	I0210 11:51:26.897636  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.897644  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:26.897650  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:26.897699  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:26.930033  172785 cri.go:89] found id: ""
	I0210 11:51:26.930067  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.930079  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:26.930089  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:26.930151  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:26.963458  172785 cri.go:89] found id: ""
	I0210 11:51:26.963497  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.963509  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:26.963519  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:26.963586  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:26.993022  172785 cri.go:89] found id: ""
	I0210 11:51:26.993051  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.993058  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:26.993065  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:26.993114  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:27.029713  172785 cri.go:89] found id: ""
	I0210 11:51:27.029756  172785 logs.go:282] 0 containers: []
	W0210 11:51:27.029768  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:27.029776  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:27.029838  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:27.065917  172785 cri.go:89] found id: ""
	I0210 11:51:27.065952  172785 logs.go:282] 0 containers: []
	W0210 11:51:27.065962  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:27.065976  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:27.065988  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:27.127397  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:27.127435  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:27.140024  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:27.140055  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:27.218604  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:27.218625  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:27.218639  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:27.293606  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:27.293645  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:29.829516  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:29.841844  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:29.841926  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:29.877623  172785 cri.go:89] found id: ""
	I0210 11:51:29.877659  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.877671  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:29.877681  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:29.877755  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:29.917643  172785 cri.go:89] found id: ""
	I0210 11:51:29.917675  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.917687  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:29.917695  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:29.917761  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:29.963649  172785 cri.go:89] found id: ""
	I0210 11:51:29.963674  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.963682  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:29.963687  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:29.963737  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:30.002084  172785 cri.go:89] found id: ""
	I0210 11:51:30.002113  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.002123  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:30.002131  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:30.002195  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:30.033435  172785 cri.go:89] found id: ""
	I0210 11:51:30.033462  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.033470  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:30.033476  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:30.033527  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:30.066494  172785 cri.go:89] found id: ""
	I0210 11:51:30.066531  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.066544  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:30.066553  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:30.066631  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:30.106190  172785 cri.go:89] found id: ""
	I0210 11:51:30.106224  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.106235  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:30.106242  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:30.106307  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:30.138747  172785 cri.go:89] found id: ""
	I0210 11:51:30.138783  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.138794  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:30.138806  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:30.138821  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:30.186179  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:30.186214  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:30.239040  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:30.239098  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:30.251790  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:30.251833  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:30.331476  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:30.331510  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:30.331526  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:32.918871  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:32.932814  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:32.932871  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:32.968103  172785 cri.go:89] found id: ""
	I0210 11:51:32.968136  172785 logs.go:282] 0 containers: []
	W0210 11:51:32.968148  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:32.968155  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:32.968218  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:33.004341  172785 cri.go:89] found id: ""
	I0210 11:51:33.004373  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.004388  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:33.004395  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:33.004448  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:33.042028  172785 cri.go:89] found id: ""
	I0210 11:51:33.042063  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.042075  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:33.042083  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:33.042146  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:33.078050  172785 cri.go:89] found id: ""
	I0210 11:51:33.078075  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.078083  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:33.078089  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:33.078138  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:33.114525  172785 cri.go:89] found id: ""
	I0210 11:51:33.114557  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.114566  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:33.114572  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:33.114642  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:33.149333  172785 cri.go:89] found id: ""
	I0210 11:51:33.149360  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.149368  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:33.149374  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:33.149442  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:33.180356  172785 cri.go:89] found id: ""
	I0210 11:51:33.180391  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.180399  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:33.180414  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:33.180466  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:33.216587  172785 cri.go:89] found id: ""
	I0210 11:51:33.216623  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.216634  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:33.216647  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:33.216663  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:33.249169  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:33.249202  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:33.298276  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:33.298313  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:33.310872  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:33.310898  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:33.383025  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:33.383053  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:33.383070  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:35.956363  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:35.968886  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:35.968960  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:36.000870  172785 cri.go:89] found id: ""
	I0210 11:51:36.000902  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.000911  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:36.000919  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:36.000969  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:36.034456  172785 cri.go:89] found id: ""
	I0210 11:51:36.034489  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.034501  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:36.034509  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:36.034573  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:36.076207  172785 cri.go:89] found id: ""
	I0210 11:51:36.076238  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.076250  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:36.076258  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:36.076323  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:36.123438  172785 cri.go:89] found id: ""
	I0210 11:51:36.123474  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.123485  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:36.123494  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:36.123561  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:36.157858  172785 cri.go:89] found id: ""
	I0210 11:51:36.157897  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.157909  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:36.157918  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:36.157986  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:36.195990  172785 cri.go:89] found id: ""
	I0210 11:51:36.196024  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.196035  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:36.196044  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:36.196110  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:36.229709  172785 cri.go:89] found id: ""
	I0210 11:51:36.229742  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.229754  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:36.229762  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:36.229828  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:36.263497  172785 cri.go:89] found id: ""
	I0210 11:51:36.263530  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.263544  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:36.263557  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:36.263575  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:36.323038  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:36.323075  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:36.339537  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:36.339565  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:36.415073  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:36.415103  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:36.415118  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:36.496333  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:36.496388  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:39.040991  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:39.053214  172785 kubeadm.go:597] duration metric: took 4m3.101491896s to restartPrimaryControlPlane
	W0210 11:51:39.053293  172785 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 11:51:39.053321  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 11:51:39.522357  172785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:51:39.540499  172785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:51:39.553326  172785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:51:39.562786  172785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:51:39.562803  172785 kubeadm.go:157] found existing configuration files:
	
	I0210 11:51:39.562852  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:51:39.573017  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:51:39.573078  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:51:39.581851  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:51:39.590590  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:51:39.590645  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:51:39.599653  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:51:39.608323  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:51:39.608385  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:51:39.617777  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:51:39.626714  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:51:39.626776  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:51:39.636522  172785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:51:39.840090  172785 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:53:36.111959  172785 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:53:36.112102  172785 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 11:53:36.113706  172785 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:53:36.113753  172785 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:53:36.113855  172785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:53:36.114008  172785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:53:36.114159  172785 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:53:36.114222  172785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:53:36.115928  172785 out.go:235]   - Generating certificates and keys ...
	I0210 11:53:36.116009  172785 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:53:36.116086  172785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:53:36.116175  172785 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 11:53:36.116231  172785 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 11:53:36.116289  172785 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 11:53:36.116335  172785 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 11:53:36.116393  172785 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 11:53:36.116446  172785 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 11:53:36.116518  172785 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 11:53:36.116583  172785 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 11:53:36.116616  172785 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 11:53:36.116668  172785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:53:36.116711  172785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:53:36.116762  172785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:53:36.116827  172785 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:53:36.116886  172785 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:53:36.116997  172785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:53:36.117109  172785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:53:36.117153  172785 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:53:36.117218  172785 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:53:36.118466  172785 out.go:235]   - Booting up control plane ...
	I0210 11:53:36.118539  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:53:36.118608  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:53:36.118679  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:53:36.118787  172785 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:53:36.118909  172785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:53:36.118953  172785 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:53:36.119006  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119163  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119240  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119382  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119444  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119585  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119661  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119821  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119883  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.120101  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.120114  172785 kubeadm.go:310] 
	I0210 11:53:36.120147  172785 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:53:36.120183  172785 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:53:36.120193  172785 kubeadm.go:310] 
	I0210 11:53:36.120226  172785 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:53:36.120255  172785 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:53:36.120349  172785 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:53:36.120362  172785 kubeadm.go:310] 
	I0210 11:53:36.120468  172785 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:53:36.120512  172785 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:53:36.120543  172785 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:53:36.120549  172785 kubeadm.go:310] 
	I0210 11:53:36.120653  172785 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:53:36.120728  172785 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:53:36.120736  172785 kubeadm.go:310] 
	I0210 11:53:36.120858  172785 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:53:36.120980  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:53:36.121098  172785 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:53:36.121214  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:53:36.121256  172785 kubeadm.go:310] 
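Note: the check kubeadm keeps retrying above is a plain HTTP GET against the kubelet's healthz endpoint. A minimal sketch of running the suggested troubleshooting steps by hand on the node (assuming the standard kubelet health port 10248 and the CRI-O socket path shown in the log):

    # probe the kubelet health endpoint that kubeadm polls
    curl -sS http://localhost:10248/healthz; echo
    # inspect the kubelet unit and its recent journal entries
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    # list any control-plane containers CRI-O may have started
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause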
	W0210 11:53:36.121387  172785 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 11:53:36.121446  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 11:53:41.570804  172785 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.449332067s)
	I0210 11:53:41.570881  172785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:53:41.583752  172785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:53:41.592553  172785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:53:41.592576  172785 kubeadm.go:157] found existing configuration files:
	
	I0210 11:53:41.592626  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:53:41.600941  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:53:41.601000  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:53:41.609340  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:53:41.617464  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:53:41.617522  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:53:41.625988  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:53:41.633984  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:53:41.634044  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:53:41.642503  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:53:41.650425  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:53:41.650482  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:53:41.658856  172785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:53:41.860461  172785 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:55:38.137554  172785 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:55:38.137647  172785 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 11:55:38.138863  172785 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:55:38.138932  172785 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:55:38.139057  172785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:55:38.139227  172785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:55:38.139319  172785 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:55:38.139374  172785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:55:38.141121  172785 out.go:235]   - Generating certificates and keys ...
	I0210 11:55:38.141232  172785 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:55:38.141287  172785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:55:38.141401  172785 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 11:55:38.141504  172785 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 11:55:38.141588  172785 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 11:55:38.141677  172785 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 11:55:38.141766  172785 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 11:55:38.141863  172785 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 11:55:38.141941  172785 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 11:55:38.142049  172785 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 11:55:38.142107  172785 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 11:55:38.142188  172785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:55:38.142262  172785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:55:38.142343  172785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:55:38.142446  172785 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:55:38.142524  172785 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:55:38.142623  172785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:55:38.142733  172785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:55:38.142772  172785 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:55:38.142847  172785 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:55:38.144218  172785 out.go:235]   - Booting up control plane ...
	I0210 11:55:38.144323  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:55:38.144400  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:55:38.144457  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:55:38.144527  172785 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:55:38.144671  172785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:55:38.144733  172785 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:55:38.144843  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145077  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145155  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145321  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145403  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145599  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145696  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145874  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145956  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.146118  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.146130  172785 kubeadm.go:310] 
	I0210 11:55:38.146170  172785 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:55:38.146213  172785 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:55:38.146227  172785 kubeadm.go:310] 
	I0210 11:55:38.146286  172785 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:55:38.146329  172785 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:55:38.146481  172785 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:55:38.146492  172785 kubeadm.go:310] 
	I0210 11:55:38.146597  172785 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:55:38.146633  172785 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:55:38.146662  172785 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:55:38.146668  172785 kubeadm.go:310] 
	I0210 11:55:38.146752  172785 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:55:38.146820  172785 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:55:38.146830  172785 kubeadm.go:310] 
	I0210 11:55:38.146936  172785 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:55:38.147020  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:55:38.147098  172785 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:55:38.147210  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:55:38.147271  172785 kubeadm.go:310] 
	I0210 11:55:38.147280  172785 kubeadm.go:394] duration metric: took 8m2.242182664s to StartCluster
	I0210 11:55:38.147337  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:55:38.147399  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:55:38.190552  172785 cri.go:89] found id: ""
	I0210 11:55:38.190585  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.190593  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:55:38.190601  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:55:38.190653  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:55:38.223994  172785 cri.go:89] found id: ""
	I0210 11:55:38.224030  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.224041  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:55:38.224050  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:55:38.224114  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:55:38.254975  172785 cri.go:89] found id: ""
	I0210 11:55:38.255002  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.255013  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:55:38.255021  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:55:38.255087  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:55:38.294383  172785 cri.go:89] found id: ""
	I0210 11:55:38.294412  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.294423  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:55:38.294431  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:55:38.294481  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:55:38.330915  172785 cri.go:89] found id: ""
	I0210 11:55:38.330943  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.330952  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:55:38.330958  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:55:38.331013  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:55:38.368811  172785 cri.go:89] found id: ""
	I0210 11:55:38.368841  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.368849  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:55:38.368856  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:55:38.368912  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:55:38.405782  172785 cri.go:89] found id: ""
	I0210 11:55:38.405809  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.405817  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:55:38.405822  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:55:38.405878  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:55:38.443286  172785 cri.go:89] found id: ""
	I0210 11:55:38.443313  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.443320  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
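Note: every `crictl ps -a --quiet --name=<component>` probe above came back empty, i.e. CRI-O never created any control-plane containers, which is consistent with the kubelet never starting. A quick way to double-check that nothing at all was created (same crictl binary and CRI-O socket as in the log assumed):

    # list all containers known to CRI-O, in any state
    sudo crictl ps -a
    # list any pod sandboxes that were created
    sudo crictl pods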
	I0210 11:55:38.443331  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:55:38.443344  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:55:38.457513  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:55:38.457552  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:55:38.535390  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:55:38.535413  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:55:38.535425  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:55:38.644609  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:55:38.644644  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:55:38.708870  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:55:38.708900  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
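Note: the kubelet journal gathered above (`journalctl -u kubelet -n 400`) is usually where the actual start-up failure surfaces. A minimal sketch for narrowing it down on the node (standard journalctl and grep usage assumed):

    # surface recent kubelet errors without paging
    sudo journalctl -u kubelet --no-pager -n 400 | grep -iE 'error|fail' | tail -n 40
    # the CRI-O journal can show why the runtime refused to start containers
    sudo journalctl -u crio --no-pager -n 100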
	W0210 11:55:38.771312  172785 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 11:55:38.771377  172785 out.go:270] * 
	W0210 11:55:38.771437  172785 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:55:38.771456  172785 out.go:270] * 
	W0210 11:55:38.772241  172785 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
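Note: the box above asks for a `minikube logs` dump when filing an issue. Assuming the same profile as this run (its name is not shown in this excerpt, so a placeholder is used), that would be something like:

    # write the full minikube log bundle to logs.txt for attaching to a GitHub issue
    minikube logs --file=logs.txt -p <profile-name>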
	I0210 11:55:38.775175  172785 out.go:201] 
	W0210 11:55:38.776401  172785 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:55:38.776449  172785 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 11:55:38.776467  172785 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 11:55:38.777818  172785 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-510006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 2 (239.148484ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-510006 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-413450 image list                          | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| delete  | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| start   | -p newest-cni-188461 --memory=2200 --alsologtostderr   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | no-preload-484935 image list                           | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| delete  | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| addons  | enable metrics-server -p newest-cni-188461             | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-448087                           | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-188461                  | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-188461 --memory=2200 --alsologtostderr   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-188461 image list                           | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	| delete  | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 11:51:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 11:51:05.820340  175432 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:51:05.820502  175432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:51:05.820516  175432 out.go:358] Setting ErrFile to fd 2...
	I0210 11:51:05.820523  175432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:51:05.820766  175432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:51:05.821523  175432 out.go:352] Setting JSON to false
	I0210 11:51:05.822831  175432 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9208,"bootTime":1739179058,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 11:51:05.822988  175432 start.go:139] virtualization: kvm guest
	I0210 11:51:05.825163  175432 out.go:177] * [newest-cni-188461] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 11:51:05.826457  175432 notify.go:220] Checking for updates...
	I0210 11:51:05.826494  175432 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:51:05.827767  175432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:51:05.828893  175432 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:51:05.830154  175432 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:51:05.831155  175432 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 11:51:05.832181  175432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:51:05.833664  175432 config.go:182] Loaded profile config "newest-cni-188461": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:51:05.834109  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:05.834167  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:05.849261  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I0210 11:51:05.849766  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:05.850430  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:05.850466  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:05.850929  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:05.851149  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:05.851442  175432 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:51:05.851738  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:05.851794  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:05.867715  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37135
	I0210 11:51:05.868207  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:05.868793  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:05.868820  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:05.869239  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:05.869480  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:05.906409  175432 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 11:51:05.907615  175432 start.go:297] selected driver: kvm2
	I0210 11:51:05.907629  175432 start.go:901] validating driver "kvm2" against &{Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:51:05.907767  175432 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:51:05.908475  175432 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:51:05.908568  175432 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-109271/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 11:51:05.924427  175432 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 11:51:05.924814  175432 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 11:51:05.924842  175432 cni.go:84] Creating CNI manager for ""
	I0210 11:51:05.924873  175432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:51:05.924904  175432 start.go:340] cluster config:
	{Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:51:05.925004  175432 iso.go:125] acquiring lock: {Name:mk479d49a84808a4b16be867aad83d1d3d802291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:51:05.926563  175432 out.go:177] * Starting "newest-cni-188461" primary control-plane node in "newest-cni-188461" cluster
	I0210 11:51:05.927651  175432 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 11:51:05.927697  175432 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 11:51:05.927710  175432 cache.go:56] Caching tarball of preloaded images
	I0210 11:51:05.927792  175432 preload.go:172] Found /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 11:51:05.927808  175432 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 11:51:05.927910  175432 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/config.json ...
	I0210 11:51:05.928134  175432 start.go:360] acquireMachinesLock for newest-cni-188461: {Name:mke6c3a615c5915495f0682c0833d8830c2c1004 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:51:05.928183  175432 start.go:364] duration metric: took 27.306µs to acquireMachinesLock for "newest-cni-188461"
	I0210 11:51:05.928204  175432 start.go:96] Skipping create...Using existing machine configuration
	I0210 11:51:05.928212  175432 fix.go:54] fixHost starting: 
	I0210 11:51:05.928550  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:05.928590  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:05.944316  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41967
	I0210 11:51:05.944759  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:05.945287  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:05.945316  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:05.945647  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:05.945896  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:05.946092  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:05.947956  175432 fix.go:112] recreateIfNeeded on newest-cni-188461: state=Stopped err=<nil>
	I0210 11:51:05.948006  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	W0210 11:51:05.948163  175432 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 11:51:05.950073  175432 out.go:177] * Restarting existing kvm2 VM for "newest-cni-188461" ...
	I0210 11:51:02.699759  172785 cri.go:89] found id: ""
	I0210 11:51:02.699826  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.699843  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:02.699853  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:02.699915  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:02.735317  172785 cri.go:89] found id: ""
	I0210 11:51:02.735346  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.735354  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:02.735360  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:02.735410  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:02.765670  172785 cri.go:89] found id: ""
	I0210 11:51:02.765697  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.765704  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:02.765710  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:02.765759  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:02.797404  172785 cri.go:89] found id: ""
	I0210 11:51:02.797435  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.797448  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:02.797456  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:02.797515  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:02.829414  172785 cri.go:89] found id: ""
	I0210 11:51:02.829448  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.829459  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:02.829471  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:02.829487  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:02.880066  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:02.880105  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:02.893239  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:02.893274  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:02.971736  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:02.971766  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:02.971782  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:03.046928  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:03.046967  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:05.590932  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:05.604033  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:05.604091  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:05.640343  172785 cri.go:89] found id: ""
	I0210 11:51:05.640374  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.640383  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:05.640391  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:05.640441  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:05.676294  172785 cri.go:89] found id: ""
	I0210 11:51:05.676319  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.676326  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:05.676331  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:05.676371  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:05.708986  172785 cri.go:89] found id: ""
	I0210 11:51:05.709016  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.709026  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:05.709034  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:05.709087  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:05.741689  172785 cri.go:89] found id: ""
	I0210 11:51:05.741714  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.741722  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:05.741728  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:05.741769  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:05.774470  172785 cri.go:89] found id: ""
	I0210 11:51:05.774496  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.774506  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:05.774514  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:05.774571  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:05.806632  172785 cri.go:89] found id: ""
	I0210 11:51:05.806659  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.806669  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:05.806676  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:05.806725  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:05.849963  172785 cri.go:89] found id: ""
	I0210 11:51:05.849987  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.850001  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:05.850012  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:05.850068  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:05.888840  172785 cri.go:89] found id: ""
	I0210 11:51:05.888870  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.888880  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:05.888893  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:05.888907  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:05.930082  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:05.930105  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:05.985122  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:05.985156  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:06.000022  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:06.000051  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:06.080268  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:06.080290  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:06.080305  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:05.951396  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Start
	I0210 11:51:05.951587  175432 main.go:141] libmachine: (newest-cni-188461) starting domain...
	I0210 11:51:05.951605  175432 main.go:141] libmachine: (newest-cni-188461) ensuring networks are active...
	I0210 11:51:05.952431  175432 main.go:141] libmachine: (newest-cni-188461) Ensuring network default is active
	I0210 11:51:05.952804  175432 main.go:141] libmachine: (newest-cni-188461) Ensuring network mk-newest-cni-188461 is active
	I0210 11:51:05.953275  175432 main.go:141] libmachine: (newest-cni-188461) getting domain XML...
	I0210 11:51:05.954033  175432 main.go:141] libmachine: (newest-cni-188461) creating domain...
	I0210 11:51:07.158707  175432 main.go:141] libmachine: (newest-cni-188461) waiting for IP...
	I0210 11:51:07.159498  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:07.159846  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:07.159937  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:07.159839  175468 retry.go:31] will retry after 306.733597ms: waiting for domain to come up
	I0210 11:51:07.468485  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:07.468938  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:07.468960  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:07.468906  175468 retry.go:31] will retry after 340.921152ms: waiting for domain to come up
	I0210 11:51:07.811449  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:07.811899  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:07.811930  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:07.811856  175468 retry.go:31] will retry after 454.621787ms: waiting for domain to come up
	I0210 11:51:08.268622  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:08.269162  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:08.269193  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:08.269129  175468 retry.go:31] will retry after 544.066974ms: waiting for domain to come up
	I0210 11:51:08.815072  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:08.815779  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:08.815813  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:08.815728  175468 retry.go:31] will retry after 715.223482ms: waiting for domain to come up
	I0210 11:51:09.532634  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:09.533080  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:09.533105  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:09.533047  175468 retry.go:31] will retry after 919.550163ms: waiting for domain to come up
	I0210 11:51:10.453662  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:10.454148  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:10.454184  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:10.454112  175468 retry.go:31] will retry after 1.132151714s: waiting for domain to come up
	I0210 11:51:08.668417  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:08.681333  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:08.681391  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:08.716394  172785 cri.go:89] found id: ""
	I0210 11:51:08.716427  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.716435  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:08.716442  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:08.716492  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:08.752135  172785 cri.go:89] found id: ""
	I0210 11:51:08.752161  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.752170  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:08.752175  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:08.752222  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:08.785404  172785 cri.go:89] found id: ""
	I0210 11:51:08.785430  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.785438  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:08.785443  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:08.785506  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:08.816938  172785 cri.go:89] found id: ""
	I0210 11:51:08.816965  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.816977  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:08.816986  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:08.817078  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:08.850791  172785 cri.go:89] found id: ""
	I0210 11:51:08.850827  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.850838  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:08.850847  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:08.850905  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:08.887566  172785 cri.go:89] found id: ""
	I0210 11:51:08.887602  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.887615  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:08.887623  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:08.887686  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:08.921347  172785 cri.go:89] found id: ""
	I0210 11:51:08.921389  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.921397  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:08.921404  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:08.921462  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:08.954704  172785 cri.go:89] found id: ""
	I0210 11:51:08.954738  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.954750  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:08.954762  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:08.954777  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:09.004897  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:09.004932  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:09.020413  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:09.020440  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:09.093835  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:09.093861  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:09.093874  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:09.174312  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:09.174355  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:11.710924  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:11.722908  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:11.722976  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:11.756702  172785 cri.go:89] found id: ""
	I0210 11:51:11.756744  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.756757  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:11.756765  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:11.756839  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:11.787281  172785 cri.go:89] found id: ""
	I0210 11:51:11.787315  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.787326  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:11.787334  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:11.787407  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:11.817416  172785 cri.go:89] found id: ""
	I0210 11:51:11.817443  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.817451  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:11.817456  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:11.817508  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:11.847209  172785 cri.go:89] found id: ""
	I0210 11:51:11.847241  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.847253  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:11.847260  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:11.847326  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:11.883365  172785 cri.go:89] found id: ""
	I0210 11:51:11.883395  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.883403  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:11.883408  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:11.883457  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:11.919812  172785 cri.go:89] found id: ""
	I0210 11:51:11.919840  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.919847  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:11.919854  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:11.919901  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:11.961310  172785 cri.go:89] found id: ""
	I0210 11:51:11.961348  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.961359  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:11.961366  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:11.961443  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:11.999667  172785 cri.go:89] found id: ""
	I0210 11:51:11.999701  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.999709  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:11.999718  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:11.999730  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:12.049284  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:12.049320  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:12.062044  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:12.062073  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:12.126307  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:12.126334  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:12.126351  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:12.215334  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:12.215382  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:11.587837  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:11.588448  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:11.588474  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:11.588419  175468 retry.go:31] will retry after 1.04294927s: waiting for domain to come up
	I0210 11:51:12.632697  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:12.633143  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:12.633181  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:12.633127  175468 retry.go:31] will retry after 1.81651321s: waiting for domain to come up
	I0210 11:51:14.452121  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:14.452630  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:14.452696  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:14.452603  175468 retry.go:31] will retry after 2.010851888s: waiting for domain to come up
	I0210 11:51:14.752711  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:14.765091  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:14.765158  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:14.796318  172785 cri.go:89] found id: ""
	I0210 11:51:14.796352  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.796362  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:14.796371  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:14.796438  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:14.826452  172785 cri.go:89] found id: ""
	I0210 11:51:14.826484  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.826493  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:14.826501  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:14.826566  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:14.859861  172785 cri.go:89] found id: ""
	I0210 11:51:14.859890  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.859898  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:14.859904  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:14.859965  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:14.893708  172785 cri.go:89] found id: ""
	I0210 11:51:14.893740  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.893748  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:14.893755  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:14.893820  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:14.925870  172785 cri.go:89] found id: ""
	I0210 11:51:14.925897  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.925905  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:14.925911  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:14.925977  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:14.960528  172785 cri.go:89] found id: ""
	I0210 11:51:14.960554  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.960562  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:14.960567  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:14.960630  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:14.992831  172785 cri.go:89] found id: ""
	I0210 11:51:14.992859  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.992867  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:14.992874  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:14.992934  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:15.026146  172785 cri.go:89] found id: ""
	I0210 11:51:15.026182  172785 logs.go:282] 0 containers: []
	W0210 11:51:15.026193  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:15.026203  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:15.026217  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:15.074502  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:15.074537  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:15.087671  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:15.087713  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:15.152959  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:15.152984  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:15.153000  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:15.225042  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:15.225082  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:16.465454  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:16.465905  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:16.465953  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:16.465902  175468 retry.go:31] will retry after 2.06317351s: waiting for domain to come up
	I0210 11:51:18.530291  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:18.530745  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:18.530777  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:18.530719  175468 retry.go:31] will retry after 3.12374249s: waiting for domain to come up
	I0210 11:51:17.763634  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:17.776970  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:17.777038  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:17.810704  172785 cri.go:89] found id: ""
	I0210 11:51:17.810736  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.810747  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:17.810755  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:17.810814  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:17.845216  172785 cri.go:89] found id: ""
	I0210 11:51:17.845242  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.845251  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:17.845257  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:17.845316  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:17.877621  172785 cri.go:89] found id: ""
	I0210 11:51:17.877652  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.877668  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:17.877675  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:17.877737  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:17.908704  172785 cri.go:89] found id: ""
	I0210 11:51:17.908730  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.908739  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:17.908744  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:17.908792  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:17.943857  172785 cri.go:89] found id: ""
	I0210 11:51:17.943887  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.943896  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:17.943902  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:17.943952  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:17.974965  172785 cri.go:89] found id: ""
	I0210 11:51:17.974998  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.975010  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:17.975018  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:17.975085  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:18.006248  172785 cri.go:89] found id: ""
	I0210 11:51:18.006282  172785 logs.go:282] 0 containers: []
	W0210 11:51:18.006292  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:18.006300  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:18.006360  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:18.036899  172785 cri.go:89] found id: ""
	I0210 11:51:18.036943  172785 logs.go:282] 0 containers: []
	W0210 11:51:18.036954  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:18.036967  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:18.036982  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:18.049026  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:18.049054  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:18.111425  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:18.111452  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:18.111464  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:18.185158  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:18.185198  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:18.220425  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:18.220458  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:20.771952  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:20.784242  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:20.784303  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:20.815676  172785 cri.go:89] found id: ""
	I0210 11:51:20.815702  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.815709  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:20.815715  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:20.815773  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:20.845540  172785 cri.go:89] found id: ""
	I0210 11:51:20.845573  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.845583  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:20.845592  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:20.845654  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:20.875046  172785 cri.go:89] found id: ""
	I0210 11:51:20.875077  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.875086  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:20.875092  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:20.875150  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:20.905636  172785 cri.go:89] found id: ""
	I0210 11:51:20.905662  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.905670  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:20.905675  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:20.905722  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:20.935907  172785 cri.go:89] found id: ""
	I0210 11:51:20.935938  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.935948  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:20.935955  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:20.936028  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:20.965345  172785 cri.go:89] found id: ""
	I0210 11:51:20.965375  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.965386  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:20.965395  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:20.965464  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:20.995608  172785 cri.go:89] found id: ""
	I0210 11:51:20.995637  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.995646  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:20.995651  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:20.995712  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:21.025886  172785 cri.go:89] found id: ""
	I0210 11:51:21.025914  172785 logs.go:282] 0 containers: []
	W0210 11:51:21.025923  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:21.025932  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:21.025946  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:21.074578  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:21.074617  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:21.087795  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:21.087825  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:21.151479  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:21.151505  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:21.151520  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:21.228563  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:21.228613  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:21.655587  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:21.656261  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:21.656284  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:21.655989  175468 retry.go:31] will retry after 4.241425857s: waiting for domain to come up
	I0210 11:51:23.769730  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:23.781806  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:23.781877  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:23.812884  172785 cri.go:89] found id: ""
	I0210 11:51:23.812912  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.812920  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:23.812926  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:23.812975  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:23.844665  172785 cri.go:89] found id: ""
	I0210 11:51:23.844700  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.844708  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:23.844713  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:23.844764  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:23.879613  172785 cri.go:89] found id: ""
	I0210 11:51:23.879642  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.879651  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:23.879657  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:23.879711  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:23.911425  172785 cri.go:89] found id: ""
	I0210 11:51:23.911452  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.911459  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:23.911465  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:23.911515  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:23.944567  172785 cri.go:89] found id: ""
	I0210 11:51:23.944601  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.944610  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:23.944617  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:23.944669  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:23.974980  172785 cri.go:89] found id: ""
	I0210 11:51:23.975008  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.975016  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:23.975022  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:23.975074  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:24.006450  172785 cri.go:89] found id: ""
	I0210 11:51:24.006484  172785 logs.go:282] 0 containers: []
	W0210 11:51:24.006492  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:24.006499  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:24.006563  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:24.037483  172785 cri.go:89] found id: ""
	I0210 11:51:24.037521  172785 logs.go:282] 0 containers: []
	W0210 11:51:24.037533  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:24.037545  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:24.037560  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:24.049887  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:24.049921  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:24.117589  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:24.117615  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:24.117628  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:24.193737  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:24.193775  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:24.230256  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:24.230287  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:26.780045  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:26.792355  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:26.792446  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:26.826505  172785 cri.go:89] found id: ""
	I0210 11:51:26.826536  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.826544  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:26.826550  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:26.826601  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:26.865128  172785 cri.go:89] found id: ""
	I0210 11:51:26.865172  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.865185  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:26.865193  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:26.865259  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:26.897605  172785 cri.go:89] found id: ""
	I0210 11:51:26.897636  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.897644  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:26.897650  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:26.897699  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:26.930033  172785 cri.go:89] found id: ""
	I0210 11:51:26.930067  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.930079  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:26.930089  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:26.930151  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:26.963458  172785 cri.go:89] found id: ""
	I0210 11:51:26.963497  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.963509  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:26.963519  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:26.963586  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:26.993022  172785 cri.go:89] found id: ""
	I0210 11:51:26.993051  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.993058  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:26.993065  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:26.993114  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:27.029713  172785 cri.go:89] found id: ""
	I0210 11:51:27.029756  172785 logs.go:282] 0 containers: []
	W0210 11:51:27.029768  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:27.029776  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:27.029838  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:27.065917  172785 cri.go:89] found id: ""
	I0210 11:51:27.065952  172785 logs.go:282] 0 containers: []
	W0210 11:51:27.065962  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:27.065976  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:27.065988  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:27.127397  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:27.127435  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:27.140024  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:27.140055  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:27.218604  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:27.218625  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:27.218639  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:27.293606  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:27.293645  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:25.902358  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.902836  175432 main.go:141] libmachine: (newest-cni-188461) found domain IP: 192.168.39.24
	I0210 11:51:25.902861  175432 main.go:141] libmachine: (newest-cni-188461) reserving static IP address...
	I0210 11:51:25.902877  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has current primary IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.903373  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "newest-cni-188461", mac: "52:54:00:25:fb:1e", ip: "192.168.39.24"} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:25.903414  175432 main.go:141] libmachine: (newest-cni-188461) DBG | skip adding static IP to network mk-newest-cni-188461 - found existing host DHCP lease matching {name: "newest-cni-188461", mac: "52:54:00:25:fb:1e", ip: "192.168.39.24"}
	I0210 11:51:25.903432  175432 main.go:141] libmachine: (newest-cni-188461) reserved static IP address 192.168.39.24 for domain newest-cni-188461
	I0210 11:51:25.903450  175432 main.go:141] libmachine: (newest-cni-188461) waiting for SSH...
	I0210 11:51:25.903464  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Getting to WaitForSSH function...
	I0210 11:51:25.905574  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.905915  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:25.905949  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.906037  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Using SSH client type: external
	I0210 11:51:25.906082  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa (-rw-------)
	I0210 11:51:25.906117  175432 main.go:141] libmachine: (newest-cni-188461) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.24 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 11:51:25.906133  175432 main.go:141] libmachine: (newest-cni-188461) DBG | About to run SSH command:
	I0210 11:51:25.906142  175432 main.go:141] libmachine: (newest-cni-188461) DBG | exit 0
	I0210 11:51:26.026989  175432 main.go:141] libmachine: (newest-cni-188461) DBG | SSH cmd err, output: <nil>: 
	I0210 11:51:26.027395  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetConfigRaw
	I0210 11:51:26.028030  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:26.030814  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.031285  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.031323  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.031552  175432 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/config.json ...
	I0210 11:51:26.031826  175432 machine.go:93] provisionDockerMachine start ...
	I0210 11:51:26.031852  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:26.032077  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.034420  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.034744  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.034774  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.034906  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.035078  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.035233  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.035365  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.035514  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.035757  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.035775  175432 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:51:26.135247  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 11:51:26.135280  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetMachineName
	I0210 11:51:26.135565  175432 buildroot.go:166] provisioning hostname "newest-cni-188461"
	I0210 11:51:26.135601  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetMachineName
	I0210 11:51:26.135800  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.138386  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.138722  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.138760  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.138918  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.139103  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.139257  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.139396  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.139525  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.139740  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.139760  175432 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-188461 && echo "newest-cni-188461" | sudo tee /etc/hostname
	I0210 11:51:26.252653  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-188461
	
	I0210 11:51:26.252681  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.255333  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.255649  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.255683  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.255832  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.256043  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.256209  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.256316  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.256451  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.256607  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.256621  175432 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-188461' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-188461/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-188461' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:51:26.367365  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:51:26.367412  175432 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20385-109271/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-109271/.minikube}
	I0210 11:51:26.367489  175432 buildroot.go:174] setting up certificates
	I0210 11:51:26.367512  175432 provision.go:84] configureAuth start
	I0210 11:51:26.367534  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetMachineName
	I0210 11:51:26.367839  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:26.370685  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.371061  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.371093  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.371229  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.373420  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.373836  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.373880  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.373983  175432 provision.go:143] copyHostCerts
	I0210 11:51:26.374051  175432 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem, removing ...
	I0210 11:51:26.374065  175432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem
	I0210 11:51:26.374133  175432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem (1078 bytes)
	I0210 11:51:26.374276  175432 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem, removing ...
	I0210 11:51:26.374287  175432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem
	I0210 11:51:26.374313  175432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem (1123 bytes)
	I0210 11:51:26.374367  175432 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem, removing ...
	I0210 11:51:26.374375  175432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem
	I0210 11:51:26.374397  175432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem (1679 bytes)
	I0210 11:51:26.374449  175432 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem org=jenkins.newest-cni-188461 san=[127.0.0.1 192.168.39.24 localhost minikube newest-cni-188461]
	I0210 11:51:26.560219  175432 provision.go:177] copyRemoteCerts
	I0210 11:51:26.560295  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:51:26.560322  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.562789  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.563081  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.563110  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.563305  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.563539  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.563695  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.563849  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:26.644785  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:51:26.666689  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0210 11:51:26.688226  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 11:51:26.709285  175432 provision.go:87] duration metric: took 341.756699ms to configureAuth
	I0210 11:51:26.709309  175432 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:51:26.709474  175432 config.go:182] Loaded profile config "newest-cni-188461": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:51:26.709553  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.712093  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.712454  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.712485  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.712651  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.712862  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.713012  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.713160  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.713286  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.713469  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.713490  175432 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 11:51:26.936519  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 11:51:26.936549  175432 machine.go:96] duration metric: took 904.704645ms to provisionDockerMachine
	I0210 11:51:26.936563  175432 start.go:293] postStartSetup for "newest-cni-188461" (driver="kvm2")
	I0210 11:51:26.936577  175432 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:51:26.936604  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:26.936940  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:51:26.936977  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.939826  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.940192  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.940237  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.940341  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.940583  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.940763  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.940960  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:27.026462  175432 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:51:27.031688  175432 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:51:27.031709  175432 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/addons for local assets ...
	I0210 11:51:27.031773  175432 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/files for local assets ...
	I0210 11:51:27.031842  175432 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem -> 1164702.pem in /etc/ssl/certs
	I0210 11:51:27.031934  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:51:27.044721  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:51:27.074068  175432 start.go:296] duration metric: took 137.488029ms for postStartSetup
	I0210 11:51:27.074125  175432 fix.go:56] duration metric: took 21.145913922s for fixHost
	I0210 11:51:27.074147  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:27.077156  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.077642  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.077674  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.077899  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:27.078079  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.078248  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.078349  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:27.078477  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:27.078645  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:27.078655  175432 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:51:27.189002  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739188287.148629499
	
	I0210 11:51:27.189035  175432 fix.go:216] guest clock: 1739188287.148629499
	I0210 11:51:27.189046  175432 fix.go:229] Guest: 2025-02-10 11:51:27.148629499 +0000 UTC Remote: 2025-02-10 11:51:27.074130149 +0000 UTC m=+21.295255642 (delta=74.49935ms)
	I0210 11:51:27.189075  175432 fix.go:200] guest clock delta is within tolerance: 74.49935ms
	I0210 11:51:27.189098  175432 start.go:83] releasing machines lock for "newest-cni-188461", held for 21.260901149s
	I0210 11:51:27.189149  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.189435  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:27.192197  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.192662  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.192691  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.192835  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.193427  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.193607  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.193731  175432 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:51:27.193784  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:27.193815  175432 ssh_runner.go:195] Run: cat /version.json
	I0210 11:51:27.193843  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:27.196421  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.196581  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.196952  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.196982  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.197011  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.197027  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.197119  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:27.197229  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:27.197348  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.197432  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.197512  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:27.197578  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:27.197673  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:27.197762  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:27.309501  175432 ssh_runner.go:195] Run: systemctl --version
	I0210 11:51:27.315451  175432 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 11:51:27.461369  175432 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 11:51:27.467018  175432 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:51:27.467094  175432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:51:27.482133  175432 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:51:27.482163  175432 start.go:495] detecting cgroup driver to use...
	I0210 11:51:27.482234  175432 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:51:27.497192  175432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:51:27.510105  175432 docker.go:217] disabling cri-docker service (if available) ...
	I0210 11:51:27.510161  175432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 11:51:27.523916  175432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 11:51:27.537043  175432 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 11:51:27.652244  175432 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 11:51:27.798511  175432 docker.go:233] disabling docker service ...
	I0210 11:51:27.798592  175432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 11:51:27.812301  175432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 11:51:27.824217  175432 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 11:51:27.953601  175432 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 11:51:28.082863  175432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 11:51:28.095446  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:51:28.111945  175432 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 11:51:28.112013  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.121412  175432 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 11:51:28.121479  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.130512  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.139646  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.148613  175432 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:51:28.157806  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.166775  175432 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.181698  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
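	As an aside, the sed edits above converge on a small set of keys in /etc/crio/crio.conf.d/02-crio.conf; a hedged way to confirm the result on the node (the TOML layout of the drop-in is assumed, only the key/value pairs are visible in this log):

	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # roughly expected:
	  #   pause_image = "registry.k8s.io/pause:3.10"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [
	  #     "net.ipv4.ip_unprivileged_port_start=0",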
	I0210 11:51:28.190623  175432 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:51:28.198803  175432 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:51:28.198866  175432 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:51:28.210820  175432 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
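	A hedged shell equivalent of the netfilter preparation just above: the bridge sysctl only becomes stat-able once br_netfilter is loaded, which is why the first probe is allowed to fail with status 255.

	  sudo modprobe br_netfilter
	  sysctl net.bridge.bridge-nf-call-iptables          # resolvable now that the module is loaded
	  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward    # same effect as the sh -c redirect in the log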
	I0210 11:51:28.219005  175432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:51:28.334861  175432 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 11:51:28.416349  175432 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 11:51:28.416439  175432 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 11:51:28.421694  175432 start.go:563] Will wait 60s for crictl version
	I0210 11:51:28.421766  175432 ssh_runner.go:195] Run: which crictl
	I0210 11:51:28.425209  175432 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:51:28.469947  175432 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 11:51:28.470045  175432 ssh_runner.go:195] Run: crio --version
	I0210 11:51:28.501926  175432 ssh_runner.go:195] Run: crio --version
	I0210 11:51:28.529983  175432 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 11:51:28.531238  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:28.534202  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:28.534482  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:28.534503  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:28.534753  175432 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 11:51:28.538726  175432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:51:28.552133  175432 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0210 11:51:28.553249  175432 kubeadm.go:883] updating cluster {Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mu
ltiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:51:28.553380  175432 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 11:51:28.553432  175432 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:51:28.586300  175432 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 11:51:28.586363  175432 ssh_runner.go:195] Run: which lz4
	I0210 11:51:28.589827  175432 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 11:51:28.593533  175432 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 11:51:28.593560  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 11:51:29.799950  175432 crio.go:462] duration metric: took 1.21014347s to copy over tarball
	I0210 11:51:29.800045  175432 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 11:51:29.829516  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:29.841844  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:29.841926  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:29.877623  172785 cri.go:89] found id: ""
	I0210 11:51:29.877659  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.877671  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:29.877681  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:29.877755  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:29.917643  172785 cri.go:89] found id: ""
	I0210 11:51:29.917675  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.917687  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:29.917695  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:29.917761  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:29.963649  172785 cri.go:89] found id: ""
	I0210 11:51:29.963674  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.963682  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:29.963687  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:29.963737  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:30.002084  172785 cri.go:89] found id: ""
	I0210 11:51:30.002113  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.002123  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:30.002131  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:30.002195  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:30.033435  172785 cri.go:89] found id: ""
	I0210 11:51:30.033462  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.033470  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:30.033476  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:30.033527  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:30.066494  172785 cri.go:89] found id: ""
	I0210 11:51:30.066531  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.066544  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:30.066553  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:30.066631  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:30.106190  172785 cri.go:89] found id: ""
	I0210 11:51:30.106224  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.106235  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:30.106242  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:30.106307  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:30.138747  172785 cri.go:89] found id: ""
	I0210 11:51:30.138783  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.138794  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:30.138806  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:30.138821  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:30.186179  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:30.186214  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:30.239040  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:30.239098  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:30.251790  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:30.251833  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:30.331476  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:30.331510  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:30.331526  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:31.868684  175432 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.068598843s)
	I0210 11:51:31.868722  175432 crio.go:469] duration metric: took 2.068733654s to extract the tarball
	I0210 11:51:31.868734  175432 ssh_runner.go:146] rm: /preloaded.tar.lz4
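	As an aside, the preload is just an lz4-compressed tar of /var container state, so it can be inspected with the same tools used to unpack it above; a sketch, assuming lz4 and tar are available locally:

	  lz4 -dc preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 | tar -t | head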
	I0210 11:51:31.905043  175432 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:51:31.949467  175432 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 11:51:31.949495  175432 cache_images.go:84] Images are preloaded, skipping loading
	I0210 11:51:31.949506  175432 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.32.1 crio true true} ...
	I0210 11:51:31.949635  175432 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-188461 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:51:31.949725  175432 ssh_runner.go:195] Run: crio config
	I0210 11:51:31.995118  175432 cni.go:84] Creating CNI manager for ""
	I0210 11:51:31.995138  175432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:51:31.995148  175432 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0210 11:51:31.995171  175432 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-188461 NodeName:newest-cni-188461 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 11:51:31.995327  175432 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-188461"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 11:51:31.995401  175432 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 11:51:32.004538  175432 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:51:32.004595  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:51:32.013199  175432 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0210 11:51:32.028077  175432 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:51:32.042573  175432 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
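	The 2292-byte kubeadm.yaml.new written above is the manifest the later "kubeadm init phase ..." calls consume; as a hedged aside, recent kubeadm can sanity-check such a file before any phase runs (subcommand assumed available in the staged v1.32.1 binary):

	  sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new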
	I0210 11:51:32.058002  175432 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I0210 11:51:32.061432  175432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:51:32.072627  175432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:51:32.186846  175432 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:51:32.202515  175432 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461 for IP: 192.168.39.24
	I0210 11:51:32.202534  175432 certs.go:194] generating shared ca certs ...
	I0210 11:51:32.202551  175432 certs.go:226] acquiring lock for ca certs: {Name:mk41def3593b0ff6effd099cf80de2e0c576c931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:32.202707  175432 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key
	I0210 11:51:32.202751  175432 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key
	I0210 11:51:32.202760  175432 certs.go:256] generating profile certs ...
	I0210 11:51:32.202851  175432 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/client.key
	I0210 11:51:32.202927  175432 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/apiserver.key.972ab71d
	I0210 11:51:32.202971  175432 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/proxy-client.key
	I0210 11:51:32.203107  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem (1338 bytes)
	W0210 11:51:32.203160  175432 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470_empty.pem, impossibly tiny 0 bytes
	I0210 11:51:32.203176  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 11:51:32.203230  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem (1078 bytes)
	I0210 11:51:32.203260  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem (1123 bytes)
	I0210 11:51:32.203292  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem (1679 bytes)
	I0210 11:51:32.203349  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:51:32.203967  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:51:32.237448  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0210 11:51:32.265671  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:51:32.300282  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 11:51:32.321803  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 11:51:32.356159  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:51:32.384387  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:51:32.405311  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:51:32.426731  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem --> /usr/share/ca-certificates/116470.pem (1338 bytes)
	I0210 11:51:32.447878  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /usr/share/ca-certificates/1164702.pem (1708 bytes)
	I0210 11:51:32.468769  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:51:32.489529  175432 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 11:51:32.504167  175432 ssh_runner.go:195] Run: openssl version
	I0210 11:51:32.509508  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116470.pem && ln -fs /usr/share/ca-certificates/116470.pem /etc/ssl/certs/116470.pem"
	I0210 11:51:32.518871  175432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116470.pem
	I0210 11:51:32.522876  175432 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:41 /usr/share/ca-certificates/116470.pem
	I0210 11:51:32.522932  175432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116470.pem
	I0210 11:51:32.528142  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116470.pem /etc/ssl/certs/51391683.0"
	I0210 11:51:32.537270  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1164702.pem && ln -fs /usr/share/ca-certificates/1164702.pem /etc/ssl/certs/1164702.pem"
	I0210 11:51:32.546522  175432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1164702.pem
	I0210 11:51:32.550499  175432 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:41 /usr/share/ca-certificates/1164702.pem
	I0210 11:51:32.550547  175432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1164702.pem
	I0210 11:51:32.555659  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1164702.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:51:32.564881  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:51:32.574099  175432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:51:32.578092  175432 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:51:32.578136  175432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:51:32.583164  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
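	The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes, so the same links could be derived instead of hard-coded; a minimal sketch for the minikubeCA bundle:

	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"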
	I0210 11:51:32.592213  175432 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:51:32.596194  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 11:51:32.601754  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 11:51:32.607136  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 11:51:32.612639  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 11:51:32.617866  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 11:51:32.623168  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
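	The -checkend probes above ask whether each certificate will still be valid 86400 seconds (24 hours) from now: openssl exits 0 if so and non-zero if it would have expired by then, e.g.

	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"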
	I0210 11:51:32.628580  175432 kubeadm.go:392] StartCluster: {Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Multi
NodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:51:32.628663  175432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 11:51:32.628718  175432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:51:32.662324  175432 cri.go:89] found id: ""
	I0210 11:51:32.662406  175432 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 11:51:32.671458  175432 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 11:51:32.671474  175432 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 11:51:32.671515  175432 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 11:51:32.680246  175432 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 11:51:32.680805  175432 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-188461" does not appear in /home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:51:32.681030  175432 kubeconfig.go:62] /home/jenkins/minikube-integration/20385-109271/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-188461" cluster setting kubeconfig missing "newest-cni-188461" context setting]
	I0210 11:51:32.681433  175432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/kubeconfig: {Name:mk38b84c4ae8f3ad09ecb56633115faef0fe39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:32.682590  175432 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 11:51:32.690876  175432 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.24
	I0210 11:51:32.690920  175432 kubeadm.go:1160] stopping kube-system containers ...
	I0210 11:51:32.690932  175432 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 11:51:32.690971  175432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:51:32.722678  175432 cri.go:89] found id: ""
	I0210 11:51:32.722734  175432 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 11:51:32.737166  175432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:51:32.745716  175432 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:51:32.745735  175432 kubeadm.go:157] found existing configuration files:
	
	I0210 11:51:32.745774  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:51:32.753706  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:51:32.753748  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:51:32.761921  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:51:32.769684  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:51:32.769733  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:51:32.778027  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:51:32.785678  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:51:32.785720  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:51:32.793869  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:51:32.801704  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:51:32.801745  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:51:32.809777  175432 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:51:32.817865  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:32.922655  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:33.799309  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:34.003678  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:34.061490  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
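	Once the control-plane and etcd phases above have run, the static Pod manifests should sit under the staticPodPath configured earlier; a quick hedged check (the file names are kubeadm's defaults, not shown in this log):

	  sudo ls /etc/kubernetes/manifests
	  # expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml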
	I0210 11:51:34.141205  175432 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:51:34.141278  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:34.641870  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:35.142005  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:35.641428  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:32.918871  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:32.932814  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:32.932871  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:32.968103  172785 cri.go:89] found id: ""
	I0210 11:51:32.968136  172785 logs.go:282] 0 containers: []
	W0210 11:51:32.968148  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:32.968155  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:32.968218  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:33.004341  172785 cri.go:89] found id: ""
	I0210 11:51:33.004373  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.004388  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:33.004395  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:33.004448  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:33.042028  172785 cri.go:89] found id: ""
	I0210 11:51:33.042063  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.042075  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:33.042083  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:33.042146  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:33.078050  172785 cri.go:89] found id: ""
	I0210 11:51:33.078075  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.078083  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:33.078089  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:33.078138  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:33.114525  172785 cri.go:89] found id: ""
	I0210 11:51:33.114557  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.114566  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:33.114572  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:33.114642  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:33.149333  172785 cri.go:89] found id: ""
	I0210 11:51:33.149360  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.149368  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:33.149374  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:33.149442  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:33.180356  172785 cri.go:89] found id: ""
	I0210 11:51:33.180391  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.180399  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:33.180414  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:33.180466  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:33.216587  172785 cri.go:89] found id: ""
	I0210 11:51:33.216623  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.216634  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:33.216647  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:33.216663  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:33.249169  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:33.249202  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:33.298276  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:33.298313  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:33.310872  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:33.310898  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:33.383025  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:33.383053  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:33.383070  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:35.956363  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:35.968886  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:35.968960  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:36.000870  172785 cri.go:89] found id: ""
	I0210 11:51:36.000902  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.000911  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:36.000919  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:36.000969  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:36.034456  172785 cri.go:89] found id: ""
	I0210 11:51:36.034489  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.034501  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:36.034509  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:36.034573  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:36.076207  172785 cri.go:89] found id: ""
	I0210 11:51:36.076238  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.076250  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:36.076258  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:36.076323  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:36.123438  172785 cri.go:89] found id: ""
	I0210 11:51:36.123474  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.123485  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:36.123494  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:36.123561  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:36.157858  172785 cri.go:89] found id: ""
	I0210 11:51:36.157897  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.157909  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:36.157918  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:36.157986  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:36.195990  172785 cri.go:89] found id: ""
	I0210 11:51:36.196024  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.196035  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:36.196044  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:36.196110  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:36.229709  172785 cri.go:89] found id: ""
	I0210 11:51:36.229742  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.229754  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:36.229762  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:36.229828  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:36.263497  172785 cri.go:89] found id: ""
	I0210 11:51:36.263530  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.263544  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:36.263557  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:36.263575  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:36.323038  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:36.323075  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:36.339537  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:36.339565  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:36.415073  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:36.415103  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:36.415118  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:36.496333  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:36.496388  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:36.142283  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:36.642276  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:36.656745  175432 api_server.go:72] duration metric: took 2.515536249s to wait for apiserver process to appear ...
	I0210 11:51:36.656777  175432 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:51:36.656802  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:39.394390  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 11:51:39.394421  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 11:51:39.394436  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:39.437828  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 11:51:39.437873  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 11:51:39.657293  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:39.664873  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 11:51:39.664898  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 11:51:40.157233  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:40.162450  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 11:51:40.162480  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 11:51:40.657079  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:40.662355  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0210 11:51:40.672632  175432 api_server.go:141] control plane version: v1.32.1
	I0210 11:51:40.672663  175432 api_server.go:131] duration metric: took 4.015877097s to wait for apiserver health ...
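The api_server.go lines above poll the apiserver's /healthz endpoint until it stops answering 500 with failed poststart hooks and returns 200 "ok". The sketch below is only a minimal illustration of that kind of wait loop, not minikube's actual implementation; the URL and the four-minute budget are assumptions taken from this log, and TLS verification is skipped purely to keep the example self-contained.

// healthzwait: illustrative poller for an apiserver /healthz endpoint,
// loosely modeled on the api_server.go wait seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification is acceptable only for a throwaway sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
			// A 500 listing "[-]poststarthook/... failed" just means not ready yet.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.24:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}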
	I0210 11:51:40.672674  175432 cni.go:84] Creating CNI manager for ""
	I0210 11:51:40.672682  175432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:51:40.674230  175432 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 11:51:40.675515  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 11:51:40.714574  175432 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 11:51:40.761839  175432 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:51:40.766154  175432 system_pods.go:59] 8 kube-system pods found
	I0210 11:51:40.766198  175432 system_pods.go:61] "coredns-668d6bf9bc-s8bdj" [b89cbee2-a27d-4c8e-950c-b9bb794dca2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 11:51:40.766211  175432 system_pods.go:61] "etcd-newest-cni-188461" [d3f5135e-dc27-4326-8b51-9273547f4ead] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 11:51:40.766222  175432 system_pods.go:61] "kube-apiserver-newest-cni-188461" [b2b151b6-34c2-45f9-b052-4978e1d4c4e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 11:51:40.766233  175432 system_pods.go:61] "kube-controller-manager-newest-cni-188461" [7c5ff0ac-2dd6-4de0-8533-de9235d7ecee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 11:51:40.766246  175432 system_pods.go:61] "kube-proxy-hnd7c" [211dd9a1-4677-4b30-a805-8c44aa78929a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0210 11:51:40.766259  175432 system_pods.go:61] "kube-scheduler-newest-cni-188461" [65a9946b-d333-4dca-8047-6243b2233902] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 11:51:40.766269  175432 system_pods.go:61] "metrics-server-f79f97bbb-bfqgl" [994d3cd1-03a9-4bc6-9d1f-726efac9bf56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 11:51:40.766285  175432 system_pods.go:61] "storage-provisioner" [ae729534-6a0a-45a8-82ab-cfcb49ba06a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 11:51:40.766295  175432 system_pods.go:74] duration metric: took 4.431457ms to wait for pod list to return data ...
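system_pods.go above waits for the kube-system pod list to come back before moving on to the NodePressure check. Assuming the kubeconfig path reported earlier in this log, an equivalent list call with client-go could look roughly like the sketch below; it is illustrative only and not the code that produced these lines.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in the log above; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20385-109271/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List the kube-system pods, the same data the system_pods wait inspects.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}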
	I0210 11:51:40.766308  175432 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:51:40.769411  175432 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:51:40.769438  175432 node_conditions.go:123] node cpu capacity is 2
	I0210 11:51:40.769451  175432 node_conditions.go:105] duration metric: took 3.132289ms to run NodePressure ...
	I0210 11:51:40.769473  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:41.086960  175432 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 11:51:41.098932  175432 ops.go:34] apiserver oom_adj: -16
	I0210 11:51:41.098960  175432 kubeadm.go:597] duration metric: took 8.427477491s to restartPrimaryControlPlane
	I0210 11:51:41.098972  175432 kubeadm.go:394] duration metric: took 8.470418783s to StartCluster
	I0210 11:51:41.098996  175432 settings.go:142] acquiring lock: {Name:mk1369a4cca9eaf53282144d4cb555c048db8e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:41.099098  175432 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:51:41.100320  175432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/kubeconfig: {Name:mk38b84c4ae8f3ad09ecb56633115faef0fe39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:41.100593  175432 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 11:51:41.100701  175432 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 11:51:41.100794  175432 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-188461"
	I0210 11:51:41.100803  175432 config.go:182] Loaded profile config "newest-cni-188461": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:51:41.100819  175432 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-188461"
	W0210 11:51:41.100827  175432 addons.go:247] addon storage-provisioner should already be in state true
	I0210 11:51:41.100817  175432 addons.go:69] Setting default-storageclass=true in profile "newest-cni-188461"
	I0210 11:51:41.100822  175432 addons.go:69] Setting metrics-server=true in profile "newest-cni-188461"
	I0210 11:51:41.100850  175432 addons.go:69] Setting dashboard=true in profile "newest-cni-188461"
	I0210 11:51:41.100852  175432 addons.go:238] Setting addon metrics-server=true in "newest-cni-188461"
	I0210 11:51:41.100860  175432 addons.go:238] Setting addon dashboard=true in "newest-cni-188461"
	I0210 11:51:41.100862  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.100863  175432 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-188461"
	W0210 11:51:41.100868  175432 addons.go:247] addon dashboard should already be in state true
	W0210 11:51:41.100872  175432 addons.go:247] addon metrics-server should already be in state true
	I0210 11:51:41.100896  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.100896  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.101280  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101284  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101284  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101297  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101304  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.101306  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.101317  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.101331  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.102551  175432 out.go:177] * Verifying Kubernetes components...
	I0210 11:51:41.104005  175432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:51:41.126954  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33921
	I0210 11:51:41.126969  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34267
	I0210 11:51:41.126987  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43197
	I0210 11:51:41.126957  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0210 11:51:41.127478  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.127629  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.127758  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.128041  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.128116  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128132  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.128297  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128317  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.128356  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128380  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.128772  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.128775  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.128814  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.128869  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128889  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.129376  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.129425  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.129664  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.129977  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.130022  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.130061  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.130084  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.130105  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.133045  175432 addons.go:238] Setting addon default-storageclass=true in "newest-cni-188461"
	W0210 11:51:41.133067  175432 addons.go:247] addon default-storageclass should already be in state true
	I0210 11:51:41.133099  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.133468  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.133505  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.151283  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0210 11:51:41.151844  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.152503  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.152516  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.152878  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.153060  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.154241  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
	I0210 11:51:41.155099  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.155177  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.155659  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.155682  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.156073  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.156257  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.157422  175432 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:51:41.157807  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.158807  175432 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:51:41.158829  175432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 11:51:41.158847  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.159480  175432 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0210 11:51:41.160731  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 11:51:41.160754  175432 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 11:51:41.160771  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.164823  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.165475  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.165588  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.165840  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.166026  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.166161  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.166279  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:41.166561  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.166895  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.166944  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.167071  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.167255  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37463
	I0210 11:51:41.167365  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.167586  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.167759  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:41.167785  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.168584  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.168608  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.168951  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.169176  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.170787  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
	I0210 11:51:41.170957  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.171371  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.171901  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.171922  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.172307  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.172722  175432 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0210 11:51:41.172993  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.173038  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.174922  175432 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0210 11:51:39.040991  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:39.053214  172785 kubeadm.go:597] duration metric: took 4m3.101491896s to restartPrimaryControlPlane
	W0210 11:51:39.053293  172785 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 11:51:39.053321  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 11:51:39.522357  172785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:51:39.540499  172785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:51:39.553326  172785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:51:39.562786  172785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:51:39.562803  172785 kubeadm.go:157] found existing configuration files:
	
	I0210 11:51:39.562852  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:51:39.573017  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:51:39.573078  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:51:39.581851  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:51:39.590590  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:51:39.590645  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:51:39.599653  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:51:39.608323  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:51:39.608385  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:51:39.617777  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:51:39.626714  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:51:39.626776  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:51:39.636522  172785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:51:39.840090  172785 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:51:41.176022  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0210 11:51:41.176045  175432 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0210 11:51:41.176065  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.179317  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.179726  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.179749  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.179976  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.180142  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.180281  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.180389  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:41.191261  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0210 11:51:41.191669  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.192145  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.192168  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.192536  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.192736  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.194288  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.194490  175432 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 11:51:41.194509  175432 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 11:51:41.194523  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.197218  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.197921  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.197930  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.197948  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.198076  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.198218  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.198446  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
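The sshutil.go lines above open SSH connections to the node at 192.168.39.24:22 as user docker with the machine's id_rsa key, and the runner then executes the addon commands over them. A rough sketch of such a client using golang.org/x/crypto/ssh follows; the host, port, user, and key path come from the log, while everything else, including the sample command, is an assumption.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, address, and user taken from the sshutil.go lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.24:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// Example command; the real runner issues the systemctl/scp/kubectl steps seen below.
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s err=%v\n", out, err)
}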
	I0210 11:51:41.369336  175432 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:51:41.409927  175432 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:51:41.410008  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:41.469358  175432 api_server.go:72] duration metric: took 368.71941ms to wait for apiserver process to appear ...
	I0210 11:51:41.469394  175432 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:51:41.469421  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:41.478932  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0210 11:51:41.479821  175432 api_server.go:141] control plane version: v1.32.1
	I0210 11:51:41.479842  175432 api_server.go:131] duration metric: took 10.440148ms to wait for apiserver health ...
	I0210 11:51:41.479849  175432 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:51:41.483318  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:51:41.492142  175432 system_pods.go:59] 8 kube-system pods found
	I0210 11:51:41.492175  175432 system_pods.go:61] "coredns-668d6bf9bc-s8bdj" [b89cbee2-a27d-4c8e-950c-b9bb794dca2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 11:51:41.492186  175432 system_pods.go:61] "etcd-newest-cni-188461" [d3f5135e-dc27-4326-8b51-9273547f4ead] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 11:51:41.492198  175432 system_pods.go:61] "kube-apiserver-newest-cni-188461" [b2b151b6-34c2-45f9-b052-4978e1d4c4e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 11:51:41.492205  175432 system_pods.go:61] "kube-controller-manager-newest-cni-188461" [7c5ff0ac-2dd6-4de0-8533-de9235d7ecee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 11:51:41.492211  175432 system_pods.go:61] "kube-proxy-hnd7c" [211dd9a1-4677-4b30-a805-8c44aa78929a] Running
	I0210 11:51:41.492217  175432 system_pods.go:61] "kube-scheduler-newest-cni-188461" [65a9946b-d333-4dca-8047-6243b2233902] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 11:51:41.492225  175432 system_pods.go:61] "metrics-server-f79f97bbb-bfqgl" [994d3cd1-03a9-4bc6-9d1f-726efac9bf56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 11:51:41.492231  175432 system_pods.go:61] "storage-provisioner" [ae729534-6a0a-45a8-82ab-cfcb49ba06a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 11:51:41.492241  175432 system_pods.go:74] duration metric: took 12.386239ms to wait for pod list to return data ...
	I0210 11:51:41.492250  175432 default_sa.go:34] waiting for default service account to be created ...
	I0210 11:51:41.519350  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 11:51:41.519703  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 11:51:41.519723  175432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0210 11:51:41.545596  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0210 11:51:41.545625  175432 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0210 11:51:41.558654  175432 default_sa.go:45] found service account: "default"
	I0210 11:51:41.558684  175432 default_sa.go:55] duration metric: took 66.426419ms for default service account to be created ...
	I0210 11:51:41.558700  175432 kubeadm.go:582] duration metric: took 458.068792ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 11:51:41.558721  175432 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:51:41.572430  175432 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:51:41.572460  175432 node_conditions.go:123] node cpu capacity is 2
	I0210 11:51:41.572474  175432 node_conditions.go:105] duration metric: took 13.747435ms to run NodePressure ...
	I0210 11:51:41.572491  175432 start.go:241] waiting for startup goroutines ...
	I0210 11:51:41.605452  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 11:51:41.605489  175432 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 11:51:41.688747  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0210 11:51:41.688776  175432 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0210 11:51:41.726543  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:51:41.726571  175432 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 11:51:41.757822  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0210 11:51:41.757858  175432 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0210 11:51:41.771198  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:51:41.825047  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0210 11:51:41.825080  175432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0210 11:51:41.882686  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0210 11:51:41.882711  175432 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0210 11:51:41.921482  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0210 11:51:41.921509  175432 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0210 11:51:41.939640  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0210 11:51:41.939672  175432 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0210 11:51:41.962617  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0210 11:51:41.962646  175432 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0210 11:51:42.038983  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 11:51:42.039022  175432 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0210 11:51:42.124093  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 11:51:43.223401  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.70401283s)
	I0210 11:51:43.223470  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223483  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.223510  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.740158145s)
	I0210 11:51:43.223551  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223567  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.223789  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.223808  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.223818  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223825  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.223882  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.223884  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.223899  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.223930  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223939  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.224164  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.224178  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.224236  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.224256  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.232594  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.232615  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.232981  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.233003  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.232998  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.308633  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.537378605s)
	I0210 11:51:43.308700  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.308717  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.309027  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.309053  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.309066  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.309075  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.309083  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.309347  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.309363  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.309374  175432 addons.go:479] Verifying addon metrics-server=true in "newest-cni-188461"
	I0210 11:51:43.556313  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.432154735s)
	I0210 11:51:43.556376  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.556405  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.556687  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.556729  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.556745  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.556755  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.556768  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.557141  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.557157  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.557176  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.558678  175432 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-188461 addons enable metrics-server
	
	I0210 11:51:43.559994  175432 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0210 11:51:43.561282  175432 addons.go:514] duration metric: took 2.460575953s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0210 11:51:43.561329  175432 start.go:246] waiting for cluster config update ...
	I0210 11:51:43.561346  175432 start.go:255] writing updated cluster config ...
	I0210 11:51:43.561735  175432 ssh_runner.go:195] Run: rm -f paused
	I0210 11:51:43.609808  175432 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 11:51:43.611600  175432 out.go:177] * Done! kubectl is now configured to use "newest-cni-188461" cluster and "default" namespace by default
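The closing lines above report kubectl 1.32.1 against cluster 1.32.1 with "minor skew: 0", i.e. the client and server minor versions match. A tiny self-contained illustration of that skew figure (the version strings are the ones from the log; the helper itself is only a sketch):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor version numbers
// of two Kubernetes version strings such as "1.32.1" or "v1.32.1".
func minorSkew(clientVersion, serverVersion string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(clientVersion)
	if err != nil {
		return 0, err
	}
	s, err := minor(serverVersion)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, err := minorSkew("1.32.1", "1.32.1") // values from the log line above
	fmt.Println(skew, err)                     // prints: 0 <nil>
}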
	I0210 11:53:36.111959  172785 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:53:36.112102  172785 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 11:53:36.113706  172785 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:53:36.113753  172785 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:53:36.113855  172785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:53:36.114008  172785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:53:36.114159  172785 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:53:36.114222  172785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:53:36.115928  172785 out.go:235]   - Generating certificates and keys ...
	I0210 11:53:36.116009  172785 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:53:36.116086  172785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:53:36.116175  172785 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 11:53:36.116231  172785 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 11:53:36.116289  172785 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 11:53:36.116335  172785 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 11:53:36.116393  172785 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 11:53:36.116446  172785 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 11:53:36.116518  172785 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 11:53:36.116583  172785 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 11:53:36.116616  172785 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 11:53:36.116668  172785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:53:36.116711  172785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:53:36.116762  172785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:53:36.116827  172785 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:53:36.116886  172785 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:53:36.116997  172785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:53:36.117109  172785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:53:36.117153  172785 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:53:36.117218  172785 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:53:36.118466  172785 out.go:235]   - Booting up control plane ...
	I0210 11:53:36.118539  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:53:36.118608  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:53:36.118679  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:53:36.118787  172785 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:53:36.118909  172785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:53:36.118953  172785 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:53:36.119006  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119163  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119240  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119382  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119444  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119585  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119661  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119821  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119883  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.120101  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.120114  172785 kubeadm.go:310] 
	I0210 11:53:36.120147  172785 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:53:36.120183  172785 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:53:36.120193  172785 kubeadm.go:310] 
	I0210 11:53:36.120226  172785 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:53:36.120255  172785 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:53:36.120349  172785 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:53:36.120362  172785 kubeadm.go:310] 
	I0210 11:53:36.120468  172785 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:53:36.120512  172785 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:53:36.120543  172785 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:53:36.120549  172785 kubeadm.go:310] 
	I0210 11:53:36.120653  172785 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:53:36.120728  172785 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:53:36.120736  172785 kubeadm.go:310] 
	I0210 11:53:36.120858  172785 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:53:36.120980  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:53:36.121098  172785 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:53:36.121214  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:53:36.121256  172785 kubeadm.go:310] 
	W0210 11:53:36.121387  172785 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 11:53:36.121446  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 11:53:41.570804  172785 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.449332067s)
	I0210 11:53:41.570881  172785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:53:41.583752  172785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:53:41.592553  172785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:53:41.592576  172785 kubeadm.go:157] found existing configuration files:
	
	I0210 11:53:41.592626  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:53:41.600941  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:53:41.601000  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:53:41.609340  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:53:41.617464  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:53:41.617522  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:53:41.625988  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:53:41.633984  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:53:41.634044  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:53:41.642503  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:53:41.650425  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:53:41.650482  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:53:41.658856  172785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:53:41.860461  172785 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:55:38.137554  172785 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:55:38.137647  172785 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 11:55:38.138863  172785 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:55:38.138932  172785 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:55:38.139057  172785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:55:38.139227  172785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:55:38.139319  172785 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:55:38.139374  172785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:55:38.141121  172785 out.go:235]   - Generating certificates and keys ...
	I0210 11:55:38.141232  172785 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:55:38.141287  172785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:55:38.141401  172785 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 11:55:38.141504  172785 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 11:55:38.141588  172785 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 11:55:38.141677  172785 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 11:55:38.141766  172785 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 11:55:38.141863  172785 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 11:55:38.141941  172785 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 11:55:38.142049  172785 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 11:55:38.142107  172785 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 11:55:38.142188  172785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:55:38.142262  172785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:55:38.142343  172785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:55:38.142446  172785 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:55:38.142524  172785 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:55:38.142623  172785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:55:38.142733  172785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:55:38.142772  172785 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:55:38.142847  172785 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:55:38.144218  172785 out.go:235]   - Booting up control plane ...
	I0210 11:55:38.144323  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:55:38.144400  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:55:38.144457  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:55:38.144527  172785 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:55:38.144671  172785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:55:38.144733  172785 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:55:38.144843  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145077  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145155  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145321  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145403  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145599  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145696  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145874  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145956  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.146118  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.146130  172785 kubeadm.go:310] 
	I0210 11:55:38.146170  172785 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:55:38.146213  172785 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:55:38.146227  172785 kubeadm.go:310] 
	I0210 11:55:38.146286  172785 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:55:38.146329  172785 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:55:38.146481  172785 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:55:38.146492  172785 kubeadm.go:310] 
	I0210 11:55:38.146597  172785 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:55:38.146633  172785 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:55:38.146662  172785 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:55:38.146668  172785 kubeadm.go:310] 
	I0210 11:55:38.146752  172785 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:55:38.146820  172785 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:55:38.146830  172785 kubeadm.go:310] 
	I0210 11:55:38.146936  172785 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:55:38.147020  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:55:38.147098  172785 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:55:38.147210  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:55:38.147271  172785 kubeadm.go:310] 
	I0210 11:55:38.147280  172785 kubeadm.go:394] duration metric: took 8m2.242182664s to StartCluster
	I0210 11:55:38.147337  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:55:38.147399  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:55:38.190552  172785 cri.go:89] found id: ""
	I0210 11:55:38.190585  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.190593  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:55:38.190601  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:55:38.190653  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:55:38.223994  172785 cri.go:89] found id: ""
	I0210 11:55:38.224030  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.224041  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:55:38.224050  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:55:38.224114  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:55:38.254975  172785 cri.go:89] found id: ""
	I0210 11:55:38.255002  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.255013  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:55:38.255021  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:55:38.255087  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:55:38.294383  172785 cri.go:89] found id: ""
	I0210 11:55:38.294412  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.294423  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:55:38.294431  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:55:38.294481  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:55:38.330915  172785 cri.go:89] found id: ""
	I0210 11:55:38.330943  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.330952  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:55:38.330958  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:55:38.331013  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:55:38.368811  172785 cri.go:89] found id: ""
	I0210 11:55:38.368841  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.368849  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:55:38.368856  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:55:38.368912  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:55:38.405782  172785 cri.go:89] found id: ""
	I0210 11:55:38.405809  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.405817  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:55:38.405822  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:55:38.405878  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:55:38.443286  172785 cri.go:89] found id: ""
	I0210 11:55:38.443313  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.443320  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:55:38.443331  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:55:38.443344  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:55:38.457513  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:55:38.457552  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:55:38.535390  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:55:38.535413  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:55:38.535425  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:55:38.644609  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:55:38.644644  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:55:38.708870  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:55:38.708900  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0210 11:55:38.771312  172785 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 11:55:38.771377  172785 out.go:270] * 
	W0210 11:55:38.771437  172785 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:55:38.771456  172785 out.go:270] * 
	W0210 11:55:38.772241  172785 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 11:55:38.775175  172785 out.go:201] 
	W0210 11:55:38.776401  172785 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:55:38.776449  172785 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 11:55:38.776467  172785 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 11:55:38.777818  172785 out.go:201] 
	
	
	==> CRI-O <==
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.734298851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739188539734280360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca1da92c-89c8-41f3-9baa-82971661af8c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.734724223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27c6cfac-bb88-4b1f-840a-a98ad18c4e8b name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.734821808Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27c6cfac-bb88-4b1f-840a-a98ad18c4e8b name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.734856535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=27c6cfac-bb88-4b1f-840a-a98ad18c4e8b name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.765559037Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79057130-6853-4a9b-aa21-124aa414faee name=/runtime.v1.RuntimeService/Version
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.765649451Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79057130-6853-4a9b-aa21-124aa414faee name=/runtime.v1.RuntimeService/Version
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.766620790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4e40634-3c91-491a-8f0e-1408ba564288 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.767063529Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739188539767042010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4e40634-3c91-491a-8f0e-1408ba564288 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.767663883Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fe03144-1913-475a-851f-52e20cf3ae09 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.767762967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fe03144-1913-475a-851f-52e20cf3ae09 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.767866828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4fe03144-1913-475a-851f-52e20cf3ae09 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.797005192Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef03eb7c-4d35-4f1e-b9e8-cae2196be729 name=/runtime.v1.RuntimeService/Version
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.797085650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef03eb7c-4d35-4f1e-b9e8-cae2196be729 name=/runtime.v1.RuntimeService/Version
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.797993012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ebffdc7-3fdf-4907-9bf2-6441f1fe21c6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.798349899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739188539798322373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ebffdc7-3fdf-4907-9bf2-6441f1fe21c6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.798770444Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b37df7b-77e2-4a9a-ad75-42671e176314 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.798871291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b37df7b-77e2-4a9a-ad75-42671e176314 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.798902932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2b37df7b-77e2-4a9a-ad75-42671e176314 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.829351500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d88e5ad7-f8d9-492d-adaf-1adcfa88035b name=/runtime.v1.RuntimeService/Version
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.829428404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d88e5ad7-f8d9-492d-adaf-1adcfa88035b name=/runtime.v1.RuntimeService/Version
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.830464350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42b0988a-7b83-4f55-9132-883904ad388c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.830887606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739188539830858032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42b0988a-7b83-4f55-9132-883904ad388c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.831336923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7d9dd9c-5bbe-4e03-9494-ad65cb2369c8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.831396758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7d9dd9c-5bbe-4e03-9494-ad65cb2369c8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 11:55:39 old-k8s-version-510006 crio[632]: time="2025-02-10 11:55:39.831438821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b7d9dd9c-5bbe-4e03-9494-ad65cb2369c8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb10 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054289] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039411] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.995296] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.082058] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.584320] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.340922] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.062802] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054806] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.152386] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.133625] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.265093] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.059229] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.067098] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.246980] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[ +12.002986] kauditd_printk_skb: 46 callbacks suppressed
	[Feb10 11:51] systemd-fstab-generator[5014]: Ignoring "noauto" option for root device
	[Feb10 11:53] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[  +0.060734] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 11:55:39 up 8 min,  0 users,  load average: 0.02, 0.11, 0.08
	Linux old-k8s-version-510006 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0007a46f0)
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007d7ef0, 0x4f0ac20, 0xc000118f50, 0x1, 0xc0001000c0)
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024ce00, 0xc0001000c0)
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000d9c000, 0xc000c67ca0)
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5476]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Feb 10 11:55:38 old-k8s-version-510006 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 10 11:55:38 old-k8s-version-510006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 10 11:55:38 old-k8s-version-510006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Feb 10 11:55:38 old-k8s-version-510006 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 10 11:55:38 old-k8s-version-510006 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5538]: I0210 11:55:38.760394    5538 server.go:416] Version: v1.20.0
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5538]: I0210 11:55:38.760663    5538 server.go:837] Client rotation is on, will bootstrap in background
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5538]: I0210 11:55:38.762314    5538 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5538]: W0210 11:55:38.763168    5538 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 10 11:55:38 old-k8s-version-510006 kubelet[5538]: I0210 11:55:38.763448    5538 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
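Note on the log above: the kubelet section ends with "Cannot detect current cgroup on cgroup v2" and systemd restarting the unit (restart counter is at 20), which is consistent with the cgroup-driver mismatch the kubeadm output already hints at ("required cgroups disabled"). A minimal sketch of how one might confirm this on the node, assuming shell access to the VM via out/minikube-linux-amd64 -p old-k8s-version-510006 ssh; the kubelet paths are the ones written during the kubelet-start phase in the log, while /etc/crio/ is assumed here as the usual CRI-O config location (it is not shown in the captured output):

	# "cgroup2fs" means the guest is running cgroup v2; "tmpfs" means v1
	stat -fc %T /sys/fs/cgroup
	# cgroup driver the kubelet was started with (flags file and config.yaml from the log)
	sudo grep -i cgroup /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml
	# cgroup manager CRI-O expects; kubelet and CRI-O must agree for the kubelet to stay up
	sudo grep -ri cgroup_manager /etc/crio/ 2>/dev/null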
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510006 -n old-k8s-version-510006
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 2 (231.834945ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-510006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (512.71s)
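Following the suggestion emitted at the end of the run (W0210 11:55:38.776449: pass --extra-config=kubelet.cgroup-driver=systemd), this is a minimal sketch of a manual retry plus the log collection the output itself asks for. The profile name is the one from this test; the flags and commands are taken from the captured output above, not verified against this report:

	out/minikube-linux-amd64 start -p old-k8s-version-510006 --extra-config=kubelet.cgroup-driver=systemd
	# if the control plane still does not come up, gather what the kubeadm output recommends
	out/minikube-linux-amd64 -p old-k8s-version-510006 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p old-k8s-version-510006 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 -p old-k8s-version-510006 logs --file=logs.txt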

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 11:55:53.023782  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous WARNING line repeated 8 more times]
E0210 11:56:02.063819  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous WARNING line repeated 3 more times]
E0210 11:56:05.877306  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/default-k8s-diff-port-448087/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous WARNING line repeated 8 more times]
E0210 11:56:15.138735  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous WARNING line repeated 88 more times]
E0210 11:57:44.325385  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/no-preload-484935/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous WARNING line repeated 10 more times]
E0210 11:57:54.943644  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous WARNING line repeated 16 more times]
E0210 11:58:12.029562  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/no-preload-484935/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous WARNING line repeated 9 more times]
E0210 11:58:22.014379  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/default-k8s-diff-port-448087/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous WARNING line repeated 10 more times]
E0210 11:58:32.416024  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 11:58:49.719171  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/default-k8s-diff-port-448087/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 11:59:06.277096  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 11:59:16.620921  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 11:59:18.008611  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 11:59:35.344739  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 11:59:55.480994  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:00:04.561781  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:00:39.686850  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:00:53.023220  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:00:58.408065  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:01:02.063562  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[same warning repeated 12 more times]
E0210 12:01:15.138360  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[same warning repeated 12 more times]
E0210 12:01:27.629204  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[same warning repeated 56 more times]
E0210 12:02:25.126524  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[same warning repeated 12 more times]
E0210 12:02:38.201909  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[same warning repeated 5 more times]
E0210 12:02:44.324844  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/no-preload-484935/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[same warning repeated 10 more times]
E0210 12:02:54.942984  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[same warning repeated 26 more times]
E0210 12:03:22.013936  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/default-k8s-diff-port-448087/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[same warning repeated 10 more times]
E0210 12:03:32.415472  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
(previous warning repeated 22 more times)
E0210 12:03:56.106597  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
(previous warning repeated 9 more times)
E0210 12:04:06.276286  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
(previous warning repeated 10 more times)
E0210 12:04:16.621674  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
(previous warning repeated 17 more times)
E0210 12:04:35.343771  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
(previous warning repeated 4 more times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510006 -n old-k8s-version-510006
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 2 (232.06346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-510006" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
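The timeout above means the test helper kept polling the kubernetes-dashboard namespace for a pod matching k8s-app=kubernetes-dashboard while the apiserver at 192.168.61.244:8443 refused connections, so the 9m0s deadline expired. As a rough illustration only (not the test's actual helper code), a minimal client-go sketch of that kind of label-selector poll could look like the following; the kubeconfig path and the 5-second retry interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration; the real test resolves the profile's own kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		// The same query the warnings above show failing: list dashboard pods by label selector.
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			fmt.Println("WARNING: pod list failed:", err) // e.g. "connect: connection refused"
			time.Sleep(5 * time.Second)
			continue
		}
		for _, p := range pods.Items {
			if p.Status.Phase == "Running" {
				fmt.Println("dashboard pod running:", p.Name)
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard")
}

In this run such a loop would never get past the List error, because the old-k8s-version apiserver never came back after the stop/start; the Stopped status reported below is consistent with that.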
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 2 (226.41248ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-510006 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-413450 image list                          | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| delete  | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| start   | -p newest-cni-188461 --memory=2200 --alsologtostderr   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | no-preload-484935 image list                           | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| delete  | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| addons  | enable metrics-server -p newest-cni-188461             | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-448087                           | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-188461                  | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-188461 --memory=2200 --alsologtostderr   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-188461 image list                           | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	| delete  | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 11:51:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 11:51:05.820340  175432 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:51:05.820502  175432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:51:05.820516  175432 out.go:358] Setting ErrFile to fd 2...
	I0210 11:51:05.820523  175432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:51:05.820766  175432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:51:05.821523  175432 out.go:352] Setting JSON to false
	I0210 11:51:05.822831  175432 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9208,"bootTime":1739179058,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 11:51:05.822988  175432 start.go:139] virtualization: kvm guest
	I0210 11:51:05.825163  175432 out.go:177] * [newest-cni-188461] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 11:51:05.826457  175432 notify.go:220] Checking for updates...
	I0210 11:51:05.826494  175432 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:51:05.827767  175432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:51:05.828893  175432 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:51:05.830154  175432 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:51:05.831155  175432 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 11:51:05.832181  175432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:51:05.833664  175432 config.go:182] Loaded profile config "newest-cni-188461": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:51:05.834109  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:05.834167  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:05.849261  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I0210 11:51:05.849766  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:05.850430  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:05.850466  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:05.850929  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:05.851149  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:05.851442  175432 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:51:05.851738  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:05.851794  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:05.867715  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37135
	I0210 11:51:05.868207  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:05.868793  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:05.868820  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:05.869239  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:05.869480  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:05.906409  175432 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 11:51:05.907615  175432 start.go:297] selected driver: kvm2
	I0210 11:51:05.907629  175432 start.go:901] validating driver "kvm2" against &{Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:51:05.907767  175432 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:51:05.908475  175432 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:51:05.908568  175432 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-109271/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 11:51:05.924427  175432 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 11:51:05.924814  175432 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 11:51:05.924842  175432 cni.go:84] Creating CNI manager for ""
	I0210 11:51:05.924873  175432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:51:05.924904  175432 start.go:340] cluster config:
	{Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:51:05.925004  175432 iso.go:125] acquiring lock: {Name:mk479d49a84808a4b16be867aad83d1d3d802291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:51:05.926563  175432 out.go:177] * Starting "newest-cni-188461" primary control-plane node in "newest-cni-188461" cluster
	I0210 11:51:05.927651  175432 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 11:51:05.927697  175432 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 11:51:05.927710  175432 cache.go:56] Caching tarball of preloaded images
	I0210 11:51:05.927792  175432 preload.go:172] Found /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 11:51:05.927808  175432 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 11:51:05.927910  175432 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/config.json ...
	I0210 11:51:05.928134  175432 start.go:360] acquireMachinesLock for newest-cni-188461: {Name:mke6c3a615c5915495f0682c0833d8830c2c1004 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:51:05.928183  175432 start.go:364] duration metric: took 27.306µs to acquireMachinesLock for "newest-cni-188461"
	I0210 11:51:05.928204  175432 start.go:96] Skipping create...Using existing machine configuration
	I0210 11:51:05.928212  175432 fix.go:54] fixHost starting: 
	I0210 11:51:05.928550  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:05.928590  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:05.944316  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41967
	I0210 11:51:05.944759  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:05.945287  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:05.945316  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:05.945647  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:05.945896  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:05.946092  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:05.947956  175432 fix.go:112] recreateIfNeeded on newest-cni-188461: state=Stopped err=<nil>
	I0210 11:51:05.948006  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	W0210 11:51:05.948163  175432 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 11:51:05.950073  175432 out.go:177] * Restarting existing kvm2 VM for "newest-cni-188461" ...
	I0210 11:51:02.699759  172785 cri.go:89] found id: ""
	I0210 11:51:02.699826  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.699843  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:02.699853  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:02.699915  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:02.735317  172785 cri.go:89] found id: ""
	I0210 11:51:02.735346  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.735354  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:02.735360  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:02.735410  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:02.765670  172785 cri.go:89] found id: ""
	I0210 11:51:02.765697  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.765704  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:02.765710  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:02.765759  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:02.797404  172785 cri.go:89] found id: ""
	I0210 11:51:02.797435  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.797448  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:02.797456  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:02.797515  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:02.829414  172785 cri.go:89] found id: ""
	I0210 11:51:02.829448  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.829459  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:02.829471  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:02.829487  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:02.880066  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:02.880105  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:02.893239  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:02.893274  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:02.971736  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:02.971766  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:02.971782  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:03.046928  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:03.046967  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:05.590932  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:05.604033  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:05.604091  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:05.640343  172785 cri.go:89] found id: ""
	I0210 11:51:05.640374  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.640383  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:05.640391  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:05.640441  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:05.676294  172785 cri.go:89] found id: ""
	I0210 11:51:05.676319  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.676326  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:05.676331  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:05.676371  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:05.708986  172785 cri.go:89] found id: ""
	I0210 11:51:05.709016  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.709026  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:05.709034  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:05.709087  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:05.741689  172785 cri.go:89] found id: ""
	I0210 11:51:05.741714  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.741722  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:05.741728  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:05.741769  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:05.774470  172785 cri.go:89] found id: ""
	I0210 11:51:05.774496  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.774506  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:05.774514  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:05.774571  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:05.806632  172785 cri.go:89] found id: ""
	I0210 11:51:05.806659  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.806669  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:05.806676  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:05.806725  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:05.849963  172785 cri.go:89] found id: ""
	I0210 11:51:05.849987  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.850001  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:05.850012  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:05.850068  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:05.888840  172785 cri.go:89] found id: ""
	I0210 11:51:05.888870  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.888880  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:05.888893  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:05.888907  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:05.930082  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:05.930105  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:05.985122  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:05.985156  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:06.000022  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:06.000051  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:06.080268  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:06.080290  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:06.080305  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:05.951396  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Start
	I0210 11:51:05.951587  175432 main.go:141] libmachine: (newest-cni-188461) starting domain...
	I0210 11:51:05.951605  175432 main.go:141] libmachine: (newest-cni-188461) ensuring networks are active...
	I0210 11:51:05.952431  175432 main.go:141] libmachine: (newest-cni-188461) Ensuring network default is active
	I0210 11:51:05.952804  175432 main.go:141] libmachine: (newest-cni-188461) Ensuring network mk-newest-cni-188461 is active
	I0210 11:51:05.953275  175432 main.go:141] libmachine: (newest-cni-188461) getting domain XML...
	I0210 11:51:05.954033  175432 main.go:141] libmachine: (newest-cni-188461) creating domain...
	I0210 11:51:07.158707  175432 main.go:141] libmachine: (newest-cni-188461) waiting for IP...
	I0210 11:51:07.159498  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:07.159846  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:07.159937  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:07.159839  175468 retry.go:31] will retry after 306.733597ms: waiting for domain to come up
	I0210 11:51:07.468485  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:07.468938  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:07.468960  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:07.468906  175468 retry.go:31] will retry after 340.921152ms: waiting for domain to come up
	I0210 11:51:07.811449  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:07.811899  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:07.811930  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:07.811856  175468 retry.go:31] will retry after 454.621787ms: waiting for domain to come up
	I0210 11:51:08.268622  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:08.269162  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:08.269193  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:08.269129  175468 retry.go:31] will retry after 544.066974ms: waiting for domain to come up
	I0210 11:51:08.815072  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:08.815779  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:08.815813  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:08.815728  175468 retry.go:31] will retry after 715.223482ms: waiting for domain to come up
	I0210 11:51:09.532634  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:09.533080  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:09.533105  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:09.533047  175468 retry.go:31] will retry after 919.550163ms: waiting for domain to come up
	I0210 11:51:10.453662  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:10.454148  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:10.454184  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:10.454112  175468 retry.go:31] will retry after 1.132151714s: waiting for domain to come up
	I0210 11:51:08.668417  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:08.681333  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:08.681391  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:08.716394  172785 cri.go:89] found id: ""
	I0210 11:51:08.716427  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.716435  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:08.716442  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:08.716492  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:08.752135  172785 cri.go:89] found id: ""
	I0210 11:51:08.752161  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.752170  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:08.752175  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:08.752222  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:08.785404  172785 cri.go:89] found id: ""
	I0210 11:51:08.785430  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.785438  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:08.785443  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:08.785506  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:08.816938  172785 cri.go:89] found id: ""
	I0210 11:51:08.816965  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.816977  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:08.816986  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:08.817078  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:08.850791  172785 cri.go:89] found id: ""
	I0210 11:51:08.850827  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.850838  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:08.850847  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:08.850905  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:08.887566  172785 cri.go:89] found id: ""
	I0210 11:51:08.887602  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.887615  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:08.887623  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:08.887686  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:08.921347  172785 cri.go:89] found id: ""
	I0210 11:51:08.921389  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.921397  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:08.921404  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:08.921462  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:08.954704  172785 cri.go:89] found id: ""
	I0210 11:51:08.954738  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.954750  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:08.954762  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:08.954777  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:09.004897  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:09.004932  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:09.020413  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:09.020440  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:09.093835  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:09.093861  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:09.093874  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:09.174312  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:09.174355  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:11.710924  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:11.722908  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:11.722976  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:11.756702  172785 cri.go:89] found id: ""
	I0210 11:51:11.756744  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.756757  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:11.756765  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:11.756839  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:11.787281  172785 cri.go:89] found id: ""
	I0210 11:51:11.787315  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.787326  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:11.787334  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:11.787407  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:11.817416  172785 cri.go:89] found id: ""
	I0210 11:51:11.817443  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.817451  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:11.817456  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:11.817508  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:11.847209  172785 cri.go:89] found id: ""
	I0210 11:51:11.847241  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.847253  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:11.847260  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:11.847326  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:11.883365  172785 cri.go:89] found id: ""
	I0210 11:51:11.883395  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.883403  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:11.883408  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:11.883457  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:11.919812  172785 cri.go:89] found id: ""
	I0210 11:51:11.919840  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.919847  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:11.919854  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:11.919901  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:11.961310  172785 cri.go:89] found id: ""
	I0210 11:51:11.961348  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.961359  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:11.961366  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:11.961443  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:11.999667  172785 cri.go:89] found id: ""
	I0210 11:51:11.999701  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.999709  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:11.999718  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:11.999730  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:12.049284  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:12.049320  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:12.062044  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:12.062073  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:12.126307  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:12.126334  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:12.126351  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:12.215334  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:12.215382  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:11.587837  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:11.588448  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:11.588474  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:11.588419  175468 retry.go:31] will retry after 1.04294927s: waiting for domain to come up
	I0210 11:51:12.632697  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:12.633143  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:12.633181  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:12.633127  175468 retry.go:31] will retry after 1.81651321s: waiting for domain to come up
	I0210 11:51:14.452121  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:14.452630  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:14.452696  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:14.452603  175468 retry.go:31] will retry after 2.010851888s: waiting for domain to come up
	I0210 11:51:14.752711  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:14.765091  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:14.765158  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:14.796318  172785 cri.go:89] found id: ""
	I0210 11:51:14.796352  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.796362  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:14.796371  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:14.796438  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:14.826452  172785 cri.go:89] found id: ""
	I0210 11:51:14.826484  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.826493  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:14.826501  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:14.826566  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:14.859861  172785 cri.go:89] found id: ""
	I0210 11:51:14.859890  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.859898  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:14.859904  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:14.859965  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:14.893708  172785 cri.go:89] found id: ""
	I0210 11:51:14.893740  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.893748  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:14.893755  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:14.893820  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:14.925870  172785 cri.go:89] found id: ""
	I0210 11:51:14.925897  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.925905  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:14.925911  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:14.925977  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:14.960528  172785 cri.go:89] found id: ""
	I0210 11:51:14.960554  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.960562  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:14.960567  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:14.960630  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:14.992831  172785 cri.go:89] found id: ""
	I0210 11:51:14.992859  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.992867  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:14.992874  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:14.992934  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:15.026146  172785 cri.go:89] found id: ""
	I0210 11:51:15.026182  172785 logs.go:282] 0 containers: []
	W0210 11:51:15.026193  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:15.026203  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:15.026217  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:15.074502  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:15.074537  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:15.087671  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:15.087713  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:15.152959  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:15.152984  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:15.153000  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:15.225042  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:15.225082  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:16.465454  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:16.465905  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:16.465953  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:16.465902  175468 retry.go:31] will retry after 2.06317351s: waiting for domain to come up
	I0210 11:51:18.530291  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:18.530745  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:18.530777  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:18.530719  175468 retry.go:31] will retry after 3.12374249s: waiting for domain to come up
	I0210 11:51:17.763634  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:17.776970  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:17.777038  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:17.810704  172785 cri.go:89] found id: ""
	I0210 11:51:17.810736  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.810747  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:17.810755  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:17.810814  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:17.845216  172785 cri.go:89] found id: ""
	I0210 11:51:17.845242  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.845251  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:17.845257  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:17.845316  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:17.877621  172785 cri.go:89] found id: ""
	I0210 11:51:17.877652  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.877668  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:17.877675  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:17.877737  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:17.908704  172785 cri.go:89] found id: ""
	I0210 11:51:17.908730  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.908739  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:17.908744  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:17.908792  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:17.943857  172785 cri.go:89] found id: ""
	I0210 11:51:17.943887  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.943896  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:17.943902  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:17.943952  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:17.974965  172785 cri.go:89] found id: ""
	I0210 11:51:17.974998  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.975010  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:17.975018  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:17.975085  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:18.006248  172785 cri.go:89] found id: ""
	I0210 11:51:18.006282  172785 logs.go:282] 0 containers: []
	W0210 11:51:18.006292  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:18.006300  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:18.006360  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:18.036899  172785 cri.go:89] found id: ""
	I0210 11:51:18.036943  172785 logs.go:282] 0 containers: []
	W0210 11:51:18.036954  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:18.036967  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:18.036982  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:18.049026  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:18.049054  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:18.111425  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:18.111452  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:18.111464  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:18.185158  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:18.185198  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:18.220425  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:18.220458  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:20.771952  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:20.784242  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:20.784303  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:20.815676  172785 cri.go:89] found id: ""
	I0210 11:51:20.815702  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.815709  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:20.815715  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:20.815773  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:20.845540  172785 cri.go:89] found id: ""
	I0210 11:51:20.845573  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.845583  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:20.845592  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:20.845654  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:20.875046  172785 cri.go:89] found id: ""
	I0210 11:51:20.875077  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.875086  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:20.875092  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:20.875150  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:20.905636  172785 cri.go:89] found id: ""
	I0210 11:51:20.905662  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.905670  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:20.905675  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:20.905722  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:20.935907  172785 cri.go:89] found id: ""
	I0210 11:51:20.935938  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.935948  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:20.935955  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:20.936028  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:20.965345  172785 cri.go:89] found id: ""
	I0210 11:51:20.965375  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.965386  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:20.965395  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:20.965464  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:20.995608  172785 cri.go:89] found id: ""
	I0210 11:51:20.995637  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.995646  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:20.995651  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:20.995712  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:21.025886  172785 cri.go:89] found id: ""
	I0210 11:51:21.025914  172785 logs.go:282] 0 containers: []
	W0210 11:51:21.025923  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:21.025932  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:21.025946  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:21.074578  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:21.074617  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:21.087795  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:21.087825  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:21.151479  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:21.151505  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:21.151520  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:21.228563  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:21.228613  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:21.655587  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:21.656261  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:21.656284  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:21.655989  175468 retry.go:31] will retry after 4.241425857s: waiting for domain to come up
	I0210 11:51:23.769730  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:23.781806  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:23.781877  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:23.812884  172785 cri.go:89] found id: ""
	I0210 11:51:23.812912  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.812920  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:23.812926  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:23.812975  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:23.844665  172785 cri.go:89] found id: ""
	I0210 11:51:23.844700  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.844708  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:23.844713  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:23.844764  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:23.879613  172785 cri.go:89] found id: ""
	I0210 11:51:23.879642  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.879651  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:23.879657  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:23.879711  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:23.911425  172785 cri.go:89] found id: ""
	I0210 11:51:23.911452  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.911459  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:23.911465  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:23.911515  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:23.944567  172785 cri.go:89] found id: ""
	I0210 11:51:23.944601  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.944610  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:23.944617  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:23.944669  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:23.974980  172785 cri.go:89] found id: ""
	I0210 11:51:23.975008  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.975016  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:23.975022  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:23.975074  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:24.006450  172785 cri.go:89] found id: ""
	I0210 11:51:24.006484  172785 logs.go:282] 0 containers: []
	W0210 11:51:24.006492  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:24.006499  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:24.006563  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:24.037483  172785 cri.go:89] found id: ""
	I0210 11:51:24.037521  172785 logs.go:282] 0 containers: []
	W0210 11:51:24.037533  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:24.037545  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:24.037560  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:24.049887  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:24.049921  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:24.117589  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:24.117615  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:24.117628  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:24.193737  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:24.193775  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:24.230256  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:24.230287  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:26.780045  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:26.792355  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:26.792446  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:26.826505  172785 cri.go:89] found id: ""
	I0210 11:51:26.826536  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.826544  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:26.826550  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:26.826601  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:26.865128  172785 cri.go:89] found id: ""
	I0210 11:51:26.865172  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.865185  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:26.865193  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:26.865259  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:26.897605  172785 cri.go:89] found id: ""
	I0210 11:51:26.897636  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.897644  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:26.897650  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:26.897699  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:26.930033  172785 cri.go:89] found id: ""
	I0210 11:51:26.930067  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.930079  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:26.930089  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:26.930151  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:26.963458  172785 cri.go:89] found id: ""
	I0210 11:51:26.963497  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.963509  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:26.963519  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:26.963586  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:26.993022  172785 cri.go:89] found id: ""
	I0210 11:51:26.993051  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.993058  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:26.993065  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:26.993114  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:27.029713  172785 cri.go:89] found id: ""
	I0210 11:51:27.029756  172785 logs.go:282] 0 containers: []
	W0210 11:51:27.029768  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:27.029776  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:27.029838  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:27.065917  172785 cri.go:89] found id: ""
	I0210 11:51:27.065952  172785 logs.go:282] 0 containers: []
	W0210 11:51:27.065962  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:27.065976  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:27.065988  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:27.127397  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:27.127435  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:27.140024  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:27.140055  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:27.218604  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:27.218625  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:27.218639  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:27.293606  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:27.293645  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:25.902358  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.902836  175432 main.go:141] libmachine: (newest-cni-188461) found domain IP: 192.168.39.24
	I0210 11:51:25.902861  175432 main.go:141] libmachine: (newest-cni-188461) reserving static IP address...
	I0210 11:51:25.902877  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has current primary IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.903373  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "newest-cni-188461", mac: "52:54:00:25:fb:1e", ip: "192.168.39.24"} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:25.903414  175432 main.go:141] libmachine: (newest-cni-188461) DBG | skip adding static IP to network mk-newest-cni-188461 - found existing host DHCP lease matching {name: "newest-cni-188461", mac: "52:54:00:25:fb:1e", ip: "192.168.39.24"}
	I0210 11:51:25.903432  175432 main.go:141] libmachine: (newest-cni-188461) reserved static IP address 192.168.39.24 for domain newest-cni-188461
	I0210 11:51:25.903450  175432 main.go:141] libmachine: (newest-cni-188461) waiting for SSH...
	I0210 11:51:25.903464  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Getting to WaitForSSH function...
	I0210 11:51:25.905574  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.905915  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:25.905949  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.906037  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Using SSH client type: external
	I0210 11:51:25.906082  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa (-rw-------)
	I0210 11:51:25.906117  175432 main.go:141] libmachine: (newest-cni-188461) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.24 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 11:51:25.906133  175432 main.go:141] libmachine: (newest-cni-188461) DBG | About to run SSH command:
	I0210 11:51:25.906142  175432 main.go:141] libmachine: (newest-cni-188461) DBG | exit 0
	I0210 11:51:26.026989  175432 main.go:141] libmachine: (newest-cni-188461) DBG | SSH cmd err, output: <nil>: 
	I0210 11:51:26.027395  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetConfigRaw
	I0210 11:51:26.028030  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:26.030814  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.031285  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.031323  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.031552  175432 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/config.json ...
	I0210 11:51:26.031826  175432 machine.go:93] provisionDockerMachine start ...
	I0210 11:51:26.031852  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:26.032077  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.034420  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.034744  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.034774  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.034906  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.035078  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.035233  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.035365  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.035514  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.035757  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.035775  175432 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:51:26.135247  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 11:51:26.135280  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetMachineName
	I0210 11:51:26.135565  175432 buildroot.go:166] provisioning hostname "newest-cni-188461"
	I0210 11:51:26.135601  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetMachineName
	I0210 11:51:26.135800  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.138386  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.138722  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.138760  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.138918  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.139103  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.139257  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.139396  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.139525  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.139740  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.139760  175432 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-188461 && echo "newest-cni-188461" | sudo tee /etc/hostname
	I0210 11:51:26.252653  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-188461
	
	I0210 11:51:26.252681  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.255333  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.255649  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.255683  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.255832  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.256043  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.256209  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.256316  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.256451  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.256607  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.256621  175432 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-188461' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-188461/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-188461' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:51:26.367365  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:51:26.367412  175432 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20385-109271/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-109271/.minikube}
	I0210 11:51:26.367489  175432 buildroot.go:174] setting up certificates
	I0210 11:51:26.367512  175432 provision.go:84] configureAuth start
	I0210 11:51:26.367534  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetMachineName
	I0210 11:51:26.367839  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:26.370685  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.371061  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.371093  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.371229  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.373420  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.373836  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.373880  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.373983  175432 provision.go:143] copyHostCerts
	I0210 11:51:26.374051  175432 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem, removing ...
	I0210 11:51:26.374065  175432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem
	I0210 11:51:26.374133  175432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem (1078 bytes)
	I0210 11:51:26.374276  175432 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem, removing ...
	I0210 11:51:26.374287  175432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem
	I0210 11:51:26.374313  175432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem (1123 bytes)
	I0210 11:51:26.374367  175432 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem, removing ...
	I0210 11:51:26.374375  175432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem
	I0210 11:51:26.374397  175432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem (1679 bytes)
	I0210 11:51:26.374449  175432 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem org=jenkins.newest-cni-188461 san=[127.0.0.1 192.168.39.24 localhost minikube newest-cni-188461]
	I0210 11:51:26.560219  175432 provision.go:177] copyRemoteCerts
	I0210 11:51:26.560295  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:51:26.560322  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.562789  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.563081  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.563110  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.563305  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.563539  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.563695  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.563849  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:26.644785  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:51:26.666689  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0210 11:51:26.688226  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 11:51:26.709285  175432 provision.go:87] duration metric: took 341.756699ms to configureAuth
	I0210 11:51:26.709309  175432 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:51:26.709474  175432 config.go:182] Loaded profile config "newest-cni-188461": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:51:26.709553  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.712093  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.712454  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.712485  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.712651  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.712862  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.713012  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.713160  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.713286  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.713469  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.713490  175432 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 11:51:26.936519  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 11:51:26.936549  175432 machine.go:96] duration metric: took 904.704645ms to provisionDockerMachine
	I0210 11:51:26.936563  175432 start.go:293] postStartSetup for "newest-cni-188461" (driver="kvm2")
	I0210 11:51:26.936577  175432 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:51:26.936604  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:26.936940  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:51:26.936977  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.939826  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.940192  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.940237  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.940341  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.940583  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.940763  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.940960  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:27.026462  175432 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:51:27.031688  175432 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:51:27.031709  175432 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/addons for local assets ...
	I0210 11:51:27.031773  175432 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/files for local assets ...
	I0210 11:51:27.031842  175432 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem -> 1164702.pem in /etc/ssl/certs
	I0210 11:51:27.031934  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:51:27.044721  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:51:27.074068  175432 start.go:296] duration metric: took 137.488029ms for postStartSetup
	I0210 11:51:27.074125  175432 fix.go:56] duration metric: took 21.145913922s for fixHost
	I0210 11:51:27.074147  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:27.077156  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.077642  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.077674  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.077899  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:27.078079  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.078248  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.078349  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:27.078477  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:27.078645  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:27.078655  175432 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:51:27.189002  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739188287.148629499
	
	I0210 11:51:27.189035  175432 fix.go:216] guest clock: 1739188287.148629499
	I0210 11:51:27.189046  175432 fix.go:229] Guest: 2025-02-10 11:51:27.148629499 +0000 UTC Remote: 2025-02-10 11:51:27.074130149 +0000 UTC m=+21.295255642 (delta=74.49935ms)
	I0210 11:51:27.189075  175432 fix.go:200] guest clock delta is within tolerance: 74.49935ms
	I0210 11:51:27.189098  175432 start.go:83] releasing machines lock for "newest-cni-188461", held for 21.260901149s
	I0210 11:51:27.189149  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.189435  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:27.192197  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.192662  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.192691  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.192835  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.193427  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.193607  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.193731  175432 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:51:27.193784  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:27.193815  175432 ssh_runner.go:195] Run: cat /version.json
	I0210 11:51:27.193843  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:27.196421  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.196581  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.196952  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.196982  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.197011  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.197027  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.197119  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:27.197229  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:27.197348  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.197432  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.197512  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:27.197578  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:27.197673  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:27.197762  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:27.309501  175432 ssh_runner.go:195] Run: systemctl --version
	I0210 11:51:27.315451  175432 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 11:51:27.461369  175432 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 11:51:27.467018  175432 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:51:27.467094  175432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:51:27.482133  175432 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:51:27.482163  175432 start.go:495] detecting cgroup driver to use...
	I0210 11:51:27.482234  175432 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:51:27.497192  175432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:51:27.510105  175432 docker.go:217] disabling cri-docker service (if available) ...
	I0210 11:51:27.510161  175432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 11:51:27.523916  175432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 11:51:27.537043  175432 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 11:51:27.652244  175432 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 11:51:27.798511  175432 docker.go:233] disabling docker service ...
	I0210 11:51:27.798592  175432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 11:51:27.812301  175432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 11:51:27.824217  175432 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 11:51:27.953601  175432 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 11:51:28.082863  175432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 11:51:28.095446  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:51:28.111945  175432 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 11:51:28.112013  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.121412  175432 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 11:51:28.121479  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.130512  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.139646  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.148613  175432 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:51:28.157806  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.166775  175432 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.181698  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.190623  175432 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:51:28.198803  175432 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:51:28.198866  175432 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:51:28.210820  175432 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:51:28.219005  175432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:51:28.334861  175432 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 11:51:28.416349  175432 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 11:51:28.416439  175432 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 11:51:28.421694  175432 start.go:563] Will wait 60s for crictl version
	I0210 11:51:28.421766  175432 ssh_runner.go:195] Run: which crictl
	I0210 11:51:28.425209  175432 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:51:28.469947  175432 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 11:51:28.470045  175432 ssh_runner.go:195] Run: crio --version
	I0210 11:51:28.501926  175432 ssh_runner.go:195] Run: crio --version
	I0210 11:51:28.529983  175432 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 11:51:28.531238  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:28.534202  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:28.534482  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:28.534503  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:28.534753  175432 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 11:51:28.538726  175432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:51:28.552133  175432 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0210 11:51:28.553249  175432 kubeadm.go:883] updating cluster {Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:51:28.553380  175432 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 11:51:28.553432  175432 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:51:28.586300  175432 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 11:51:28.586363  175432 ssh_runner.go:195] Run: which lz4
	I0210 11:51:28.589827  175432 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 11:51:28.593533  175432 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 11:51:28.593560  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 11:51:29.799950  175432 crio.go:462] duration metric: took 1.21014347s to copy over tarball
	I0210 11:51:29.800045  175432 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 11:51:29.829516  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:29.841844  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:29.841926  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:29.877623  172785 cri.go:89] found id: ""
	I0210 11:51:29.877659  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.877671  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:29.877681  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:29.877755  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:29.917643  172785 cri.go:89] found id: ""
	I0210 11:51:29.917675  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.917687  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:29.917695  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:29.917761  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:29.963649  172785 cri.go:89] found id: ""
	I0210 11:51:29.963674  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.963682  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:29.963687  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:29.963737  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:30.002084  172785 cri.go:89] found id: ""
	I0210 11:51:30.002113  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.002123  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:30.002131  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:30.002195  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:30.033435  172785 cri.go:89] found id: ""
	I0210 11:51:30.033462  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.033470  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:30.033476  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:30.033527  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:30.066494  172785 cri.go:89] found id: ""
	I0210 11:51:30.066531  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.066544  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:30.066553  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:30.066631  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:30.106190  172785 cri.go:89] found id: ""
	I0210 11:51:30.106224  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.106235  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:30.106242  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:30.106307  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:30.138747  172785 cri.go:89] found id: ""
	I0210 11:51:30.138783  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.138794  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:30.138806  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:30.138821  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:30.186179  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:30.186214  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:30.239040  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:30.239098  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:30.251790  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:30.251833  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:30.331476  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:30.331510  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:30.331526  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:31.868684  175432 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.068598843s)
	I0210 11:51:31.868722  175432 crio.go:469] duration metric: took 2.068733654s to extract the tarball
	I0210 11:51:31.868734  175432 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 11:51:31.905043  175432 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:51:31.949467  175432 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 11:51:31.949495  175432 cache_images.go:84] Images are preloaded, skipping loading
	I0210 11:51:31.949506  175432 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.32.1 crio true true} ...
	I0210 11:51:31.949635  175432 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-188461 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:51:31.949725  175432 ssh_runner.go:195] Run: crio config
	I0210 11:51:31.995118  175432 cni.go:84] Creating CNI manager for ""
	I0210 11:51:31.995138  175432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:51:31.995148  175432 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0210 11:51:31.995171  175432 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-188461 NodeName:newest-cni-188461 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 11:51:31.995327  175432 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-188461"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 11:51:31.995401  175432 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 11:51:32.004538  175432 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:51:32.004595  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:51:32.013199  175432 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0210 11:51:32.028077  175432 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:51:32.042573  175432 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0210 11:51:32.058002  175432 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I0210 11:51:32.061432  175432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:51:32.072627  175432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:51:32.186846  175432 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:51:32.202515  175432 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461 for IP: 192.168.39.24
	I0210 11:51:32.202534  175432 certs.go:194] generating shared ca certs ...
	I0210 11:51:32.202551  175432 certs.go:226] acquiring lock for ca certs: {Name:mk41def3593b0ff6effd099cf80de2e0c576c931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:32.202707  175432 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key
	I0210 11:51:32.202751  175432 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key
	I0210 11:51:32.202760  175432 certs.go:256] generating profile certs ...
	I0210 11:51:32.202851  175432 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/client.key
	I0210 11:51:32.202927  175432 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/apiserver.key.972ab71d
	I0210 11:51:32.202971  175432 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/proxy-client.key
	I0210 11:51:32.203107  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem (1338 bytes)
	W0210 11:51:32.203160  175432 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470_empty.pem, impossibly tiny 0 bytes
	I0210 11:51:32.203176  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 11:51:32.203230  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem (1078 bytes)
	I0210 11:51:32.203260  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem (1123 bytes)
	I0210 11:51:32.203292  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem (1679 bytes)
	I0210 11:51:32.203349  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:51:32.203967  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:51:32.237448  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0210 11:51:32.265671  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:51:32.300282  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 11:51:32.321803  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 11:51:32.356159  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:51:32.384387  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:51:32.405311  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:51:32.426731  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem --> /usr/share/ca-certificates/116470.pem (1338 bytes)
	I0210 11:51:32.447878  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /usr/share/ca-certificates/1164702.pem (1708 bytes)
	I0210 11:51:32.468769  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:51:32.489529  175432 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 11:51:32.504167  175432 ssh_runner.go:195] Run: openssl version
	I0210 11:51:32.509508  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116470.pem && ln -fs /usr/share/ca-certificates/116470.pem /etc/ssl/certs/116470.pem"
	I0210 11:51:32.518871  175432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116470.pem
	I0210 11:51:32.522876  175432 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:41 /usr/share/ca-certificates/116470.pem
	I0210 11:51:32.522932  175432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116470.pem
	I0210 11:51:32.528142  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116470.pem /etc/ssl/certs/51391683.0"
	I0210 11:51:32.537270  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1164702.pem && ln -fs /usr/share/ca-certificates/1164702.pem /etc/ssl/certs/1164702.pem"
	I0210 11:51:32.546522  175432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1164702.pem
	I0210 11:51:32.550499  175432 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:41 /usr/share/ca-certificates/1164702.pem
	I0210 11:51:32.550547  175432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1164702.pem
	I0210 11:51:32.555659  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1164702.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:51:32.564881  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:51:32.574099  175432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:51:32.578092  175432 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:51:32.578136  175432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:51:32.583164  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:51:32.592213  175432 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:51:32.596194  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 11:51:32.601754  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 11:51:32.607136  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 11:51:32.612639  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 11:51:32.617866  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 11:51:32.623168  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 11:51:32.628580  175432 kubeadm.go:392] StartCluster: {Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:51:32.628663  175432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 11:51:32.628718  175432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:51:32.662324  175432 cri.go:89] found id: ""
	I0210 11:51:32.662406  175432 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 11:51:32.671458  175432 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 11:51:32.671474  175432 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 11:51:32.671515  175432 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 11:51:32.680246  175432 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 11:51:32.680805  175432 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-188461" does not appear in /home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:51:32.681030  175432 kubeconfig.go:62] /home/jenkins/minikube-integration/20385-109271/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-188461" cluster setting kubeconfig missing "newest-cni-188461" context setting]
	I0210 11:51:32.681433  175432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/kubeconfig: {Name:mk38b84c4ae8f3ad09ecb56633115faef0fe39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:32.682590  175432 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 11:51:32.690876  175432 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.24
	I0210 11:51:32.690920  175432 kubeadm.go:1160] stopping kube-system containers ...
	I0210 11:51:32.690932  175432 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 11:51:32.690971  175432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:51:32.722678  175432 cri.go:89] found id: ""
	I0210 11:51:32.722734  175432 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 11:51:32.737166  175432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:51:32.745716  175432 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:51:32.745735  175432 kubeadm.go:157] found existing configuration files:
	
	I0210 11:51:32.745774  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:51:32.753706  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:51:32.753748  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:51:32.761921  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:51:32.769684  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:51:32.769733  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:51:32.778027  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:51:32.785678  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:51:32.785720  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:51:32.793869  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:51:32.801704  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:51:32.801745  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:51:32.809777  175432 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:51:32.817865  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:32.922655  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:33.799309  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:34.003678  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:34.061490  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:34.141205  175432 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:51:34.141278  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:34.641870  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:35.142005  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:35.641428  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:32.918871  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:32.932814  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:32.932871  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:32.968103  172785 cri.go:89] found id: ""
	I0210 11:51:32.968136  172785 logs.go:282] 0 containers: []
	W0210 11:51:32.968148  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:32.968155  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:32.968218  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:33.004341  172785 cri.go:89] found id: ""
	I0210 11:51:33.004373  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.004388  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:33.004395  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:33.004448  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:33.042028  172785 cri.go:89] found id: ""
	I0210 11:51:33.042063  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.042075  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:33.042083  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:33.042146  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:33.078050  172785 cri.go:89] found id: ""
	I0210 11:51:33.078075  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.078083  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:33.078089  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:33.078138  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:33.114525  172785 cri.go:89] found id: ""
	I0210 11:51:33.114557  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.114566  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:33.114572  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:33.114642  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:33.149333  172785 cri.go:89] found id: ""
	I0210 11:51:33.149360  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.149368  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:33.149374  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:33.149442  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:33.180356  172785 cri.go:89] found id: ""
	I0210 11:51:33.180391  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.180399  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:33.180414  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:33.180466  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:33.216587  172785 cri.go:89] found id: ""
	I0210 11:51:33.216623  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.216634  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:33.216647  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:33.216663  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:33.249169  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:33.249202  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:33.298276  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:33.298313  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:33.310872  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:33.310898  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:33.383025  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:33.383053  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:33.383070  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:35.956363  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:35.968886  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:35.968960  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:36.000870  172785 cri.go:89] found id: ""
	I0210 11:51:36.000902  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.000911  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:36.000919  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:36.000969  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:36.034456  172785 cri.go:89] found id: ""
	I0210 11:51:36.034489  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.034501  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:36.034509  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:36.034573  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:36.076207  172785 cri.go:89] found id: ""
	I0210 11:51:36.076238  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.076250  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:36.076258  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:36.076323  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:36.123438  172785 cri.go:89] found id: ""
	I0210 11:51:36.123474  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.123485  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:36.123494  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:36.123561  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:36.157858  172785 cri.go:89] found id: ""
	I0210 11:51:36.157897  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.157909  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:36.157918  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:36.157986  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:36.195990  172785 cri.go:89] found id: ""
	I0210 11:51:36.196024  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.196035  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:36.196044  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:36.196110  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:36.229709  172785 cri.go:89] found id: ""
	I0210 11:51:36.229742  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.229754  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:36.229762  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:36.229828  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:36.263497  172785 cri.go:89] found id: ""
	I0210 11:51:36.263530  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.263544  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:36.263557  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:36.263575  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:36.323038  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:36.323075  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:36.339537  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:36.339565  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:36.415073  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:36.415103  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:36.415118  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:36.496333  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:36.496388  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:36.142283  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:36.642276  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:36.656745  175432 api_server.go:72] duration metric: took 2.515536249s to wait for apiserver process to appear ...
	I0210 11:51:36.656777  175432 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:51:36.656802  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:39.394390  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 11:51:39.394421  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 11:51:39.394436  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:39.437828  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 11:51:39.437873  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 11:51:39.657293  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:39.664873  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 11:51:39.664898  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 11:51:40.157233  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:40.162450  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 11:51:40.162480  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 11:51:40.657079  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:40.662355  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0210 11:51:40.672632  175432 api_server.go:141] control plane version: v1.32.1
	I0210 11:51:40.672663  175432 api_server.go:131] duration metric: took 4.015877097s to wait for apiserver health ...
	I0210 11:51:40.672674  175432 cni.go:84] Creating CNI manager for ""
	I0210 11:51:40.672682  175432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:51:40.674230  175432 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 11:51:40.675515  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 11:51:40.714574  175432 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 11:51:40.761839  175432 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:51:40.766154  175432 system_pods.go:59] 8 kube-system pods found
	I0210 11:51:40.766198  175432 system_pods.go:61] "coredns-668d6bf9bc-s8bdj" [b89cbee2-a27d-4c8e-950c-b9bb794dca2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 11:51:40.766211  175432 system_pods.go:61] "etcd-newest-cni-188461" [d3f5135e-dc27-4326-8b51-9273547f4ead] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 11:51:40.766222  175432 system_pods.go:61] "kube-apiserver-newest-cni-188461" [b2b151b6-34c2-45f9-b052-4978e1d4c4e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 11:51:40.766233  175432 system_pods.go:61] "kube-controller-manager-newest-cni-188461" [7c5ff0ac-2dd6-4de0-8533-de9235d7ecee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 11:51:40.766246  175432 system_pods.go:61] "kube-proxy-hnd7c" [211dd9a1-4677-4b30-a805-8c44aa78929a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0210 11:51:40.766259  175432 system_pods.go:61] "kube-scheduler-newest-cni-188461" [65a9946b-d333-4dca-8047-6243b2233902] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 11:51:40.766269  175432 system_pods.go:61] "metrics-server-f79f97bbb-bfqgl" [994d3cd1-03a9-4bc6-9d1f-726efac9bf56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 11:51:40.766285  175432 system_pods.go:61] "storage-provisioner" [ae729534-6a0a-45a8-82ab-cfcb49ba06a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 11:51:40.766295  175432 system_pods.go:74] duration metric: took 4.431457ms to wait for pod list to return data ...
	I0210 11:51:40.766308  175432 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:51:40.769411  175432 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:51:40.769438  175432 node_conditions.go:123] node cpu capacity is 2
	I0210 11:51:40.769451  175432 node_conditions.go:105] duration metric: took 3.132289ms to run NodePressure ...
	I0210 11:51:40.769473  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:41.086960  175432 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 11:51:41.098932  175432 ops.go:34] apiserver oom_adj: -16
	I0210 11:51:41.098960  175432 kubeadm.go:597] duration metric: took 8.427477491s to restartPrimaryControlPlane
	I0210 11:51:41.098972  175432 kubeadm.go:394] duration metric: took 8.470418783s to StartCluster
	I0210 11:51:41.098996  175432 settings.go:142] acquiring lock: {Name:mk1369a4cca9eaf53282144d4cb555c048db8e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:41.099098  175432 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:51:41.100320  175432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/kubeconfig: {Name:mk38b84c4ae8f3ad09ecb56633115faef0fe39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:41.100593  175432 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 11:51:41.100701  175432 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 11:51:41.100794  175432 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-188461"
	I0210 11:51:41.100803  175432 config.go:182] Loaded profile config "newest-cni-188461": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:51:41.100819  175432 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-188461"
	W0210 11:51:41.100827  175432 addons.go:247] addon storage-provisioner should already be in state true
	I0210 11:51:41.100817  175432 addons.go:69] Setting default-storageclass=true in profile "newest-cni-188461"
	I0210 11:51:41.100822  175432 addons.go:69] Setting metrics-server=true in profile "newest-cni-188461"
	I0210 11:51:41.100850  175432 addons.go:69] Setting dashboard=true in profile "newest-cni-188461"
	I0210 11:51:41.100852  175432 addons.go:238] Setting addon metrics-server=true in "newest-cni-188461"
	I0210 11:51:41.100860  175432 addons.go:238] Setting addon dashboard=true in "newest-cni-188461"
	I0210 11:51:41.100862  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.100863  175432 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-188461"
	W0210 11:51:41.100868  175432 addons.go:247] addon dashboard should already be in state true
	W0210 11:51:41.100872  175432 addons.go:247] addon metrics-server should already be in state true
	I0210 11:51:41.100896  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.100896  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.101280  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101284  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101284  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101297  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101304  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.101306  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.101317  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.101331  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.102551  175432 out.go:177] * Verifying Kubernetes components...
	I0210 11:51:41.104005  175432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:51:41.126954  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33921
	I0210 11:51:41.126969  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34267
	I0210 11:51:41.126987  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43197
	I0210 11:51:41.126957  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0210 11:51:41.127478  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.127629  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.127758  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.128041  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.128116  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128132  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.128297  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128317  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.128356  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128380  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.128772  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.128775  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.128814  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.128869  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128889  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.129376  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.129425  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.129664  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.129977  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.130022  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.130061  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.130084  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.130105  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.133045  175432 addons.go:238] Setting addon default-storageclass=true in "newest-cni-188461"
	W0210 11:51:41.133067  175432 addons.go:247] addon default-storageclass should already be in state true
	I0210 11:51:41.133099  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.133468  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.133505  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.151283  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0210 11:51:41.151844  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.152503  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.152516  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.152878  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.153060  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.154241  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
	I0210 11:51:41.155099  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.155177  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.155659  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.155682  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.156073  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.156257  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.157422  175432 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:51:41.157807  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.158807  175432 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:51:41.158829  175432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 11:51:41.158847  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.159480  175432 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0210 11:51:41.160731  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 11:51:41.160754  175432 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 11:51:41.160771  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.164823  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.165475  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.165588  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.165840  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.166026  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.166161  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.166279  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:41.166561  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.166895  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.166944  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.167071  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.167255  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37463
	I0210 11:51:41.167365  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.167586  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.167759  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:41.167785  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.168584  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.168608  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.168951  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.169176  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.170787  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
	I0210 11:51:41.170957  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.171371  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.171901  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.171922  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.172307  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.172722  175432 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0210 11:51:41.172993  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.173038  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.174922  175432 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0210 11:51:39.040991  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:39.053214  172785 kubeadm.go:597] duration metric: took 4m3.101491896s to restartPrimaryControlPlane
	W0210 11:51:39.053293  172785 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 11:51:39.053321  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 11:51:39.522357  172785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:51:39.540499  172785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:51:39.553326  172785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:51:39.562786  172785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:51:39.562803  172785 kubeadm.go:157] found existing configuration files:
	
	I0210 11:51:39.562852  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:51:39.573017  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:51:39.573078  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:51:39.581851  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:51:39.590590  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:51:39.590645  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:51:39.599653  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:51:39.608323  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:51:39.608385  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:51:39.617777  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:51:39.626714  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:51:39.626776  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:51:39.636522  172785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:51:39.840090  172785 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:51:41.176022  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0210 11:51:41.176045  175432 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0210 11:51:41.176065  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.179317  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.179726  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.179749  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.179976  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.180142  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.180281  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.180389  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:41.191261  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0210 11:51:41.191669  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.192145  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.192168  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.192536  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.192736  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.194288  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.194490  175432 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 11:51:41.194509  175432 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 11:51:41.194523  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.197218  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.197921  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.197930  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.197948  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.198076  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.198218  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.198446  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:41.369336  175432 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:51:41.409927  175432 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:51:41.410008  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:41.469358  175432 api_server.go:72] duration metric: took 368.71941ms to wait for apiserver process to appear ...
	I0210 11:51:41.469394  175432 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:51:41.469421  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:41.478932  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0210 11:51:41.479821  175432 api_server.go:141] control plane version: v1.32.1
	I0210 11:51:41.479842  175432 api_server.go:131] duration metric: took 10.440148ms to wait for apiserver health ...
	I0210 11:51:41.479849  175432 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:51:41.483318  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:51:41.492142  175432 system_pods.go:59] 8 kube-system pods found
	I0210 11:51:41.492175  175432 system_pods.go:61] "coredns-668d6bf9bc-s8bdj" [b89cbee2-a27d-4c8e-950c-b9bb794dca2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 11:51:41.492186  175432 system_pods.go:61] "etcd-newest-cni-188461" [d3f5135e-dc27-4326-8b51-9273547f4ead] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 11:51:41.492198  175432 system_pods.go:61] "kube-apiserver-newest-cni-188461" [b2b151b6-34c2-45f9-b052-4978e1d4c4e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 11:51:41.492205  175432 system_pods.go:61] "kube-controller-manager-newest-cni-188461" [7c5ff0ac-2dd6-4de0-8533-de9235d7ecee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 11:51:41.492211  175432 system_pods.go:61] "kube-proxy-hnd7c" [211dd9a1-4677-4b30-a805-8c44aa78929a] Running
	I0210 11:51:41.492217  175432 system_pods.go:61] "kube-scheduler-newest-cni-188461" [65a9946b-d333-4dca-8047-6243b2233902] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 11:51:41.492225  175432 system_pods.go:61] "metrics-server-f79f97bbb-bfqgl" [994d3cd1-03a9-4bc6-9d1f-726efac9bf56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 11:51:41.492231  175432 system_pods.go:61] "storage-provisioner" [ae729534-6a0a-45a8-82ab-cfcb49ba06a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 11:51:41.492241  175432 system_pods.go:74] duration metric: took 12.386239ms to wait for pod list to return data ...
	I0210 11:51:41.492250  175432 default_sa.go:34] waiting for default service account to be created ...
	I0210 11:51:41.519350  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 11:51:41.519703  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 11:51:41.519723  175432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0210 11:51:41.545596  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0210 11:51:41.545625  175432 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0210 11:51:41.558654  175432 default_sa.go:45] found service account: "default"
	I0210 11:51:41.558684  175432 default_sa.go:55] duration metric: took 66.426419ms for default service account to be created ...
	I0210 11:51:41.558700  175432 kubeadm.go:582] duration metric: took 458.068792ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 11:51:41.558721  175432 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:51:41.572430  175432 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:51:41.572460  175432 node_conditions.go:123] node cpu capacity is 2
	I0210 11:51:41.572474  175432 node_conditions.go:105] duration metric: took 13.747435ms to run NodePressure ...
	I0210 11:51:41.572491  175432 start.go:241] waiting for startup goroutines ...
	I0210 11:51:41.605452  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 11:51:41.605489  175432 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 11:51:41.688747  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0210 11:51:41.688776  175432 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0210 11:51:41.726543  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:51:41.726571  175432 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 11:51:41.757822  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0210 11:51:41.757858  175432 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0210 11:51:41.771198  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:51:41.825047  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0210 11:51:41.825080  175432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0210 11:51:41.882686  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0210 11:51:41.882711  175432 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0210 11:51:41.921482  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0210 11:51:41.921509  175432 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0210 11:51:41.939640  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0210 11:51:41.939672  175432 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0210 11:51:41.962617  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0210 11:51:41.962646  175432 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0210 11:51:42.038983  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 11:51:42.039022  175432 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0210 11:51:42.124093  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 11:51:43.223401  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.70401283s)
	I0210 11:51:43.223470  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223483  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.223510  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.740158145s)
	I0210 11:51:43.223551  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223567  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.223789  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.223808  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.223818  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223825  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.223882  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.223884  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.223899  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.223930  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223939  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.224164  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.224178  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.224236  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.224256  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.232594  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.232615  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.232981  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.233003  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.232998  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.308633  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.537378605s)
	I0210 11:51:43.308700  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.308717  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.309027  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.309053  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.309066  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.309075  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.309083  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.309347  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.309363  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.309374  175432 addons.go:479] Verifying addon metrics-server=true in "newest-cni-188461"
	I0210 11:51:43.556313  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.432154735s)
	I0210 11:51:43.556376  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.556405  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.556687  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.556729  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.556745  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.556755  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.556768  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.557141  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.557157  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.557176  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.558678  175432 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-188461 addons enable metrics-server
	
	I0210 11:51:43.559994  175432 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0210 11:51:43.561282  175432 addons.go:514] duration metric: took 2.460575953s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0210 11:51:43.561329  175432 start.go:246] waiting for cluster config update ...
	I0210 11:51:43.561346  175432 start.go:255] writing updated cluster config ...
	I0210 11:51:43.561735  175432 ssh_runner.go:195] Run: rm -f paused
	I0210 11:51:43.609808  175432 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 11:51:43.611600  175432 out.go:177] * Done! kubectl is now configured to use "newest-cni-188461" cluster and "default" namespace by default
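The addon step above copies the dashboard manifests into /etc/kubernetes/addons and applies them with a single kubectl invocation run under KUBECONFIG=/var/lib/minikube/kubeconfig. A minimal local sketch of that apply step (Go, not minikube's ssh_runner; it assumes the kubectl binary path, the kubeconfig, and a subset of the manifest paths shown in the log actually exist on the node):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Paths as they appear in the log above; their presence is an assumption.
	kubectl := "/var/lib/minikube/binaries/v1.32.1/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}

	// Build "kubectl apply -f a.yaml -f b.yaml ..." as a flat argument list.
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}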
	I0210 11:53:36.111959  172785 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:53:36.112102  172785 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 11:53:36.113706  172785 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:53:36.113753  172785 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:53:36.113855  172785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:53:36.114008  172785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:53:36.114159  172785 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:53:36.114222  172785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:53:36.115928  172785 out.go:235]   - Generating certificates and keys ...
	I0210 11:53:36.116009  172785 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:53:36.116086  172785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:53:36.116175  172785 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 11:53:36.116231  172785 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 11:53:36.116289  172785 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 11:53:36.116335  172785 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 11:53:36.116393  172785 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 11:53:36.116446  172785 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 11:53:36.116518  172785 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 11:53:36.116583  172785 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 11:53:36.116616  172785 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 11:53:36.116668  172785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:53:36.116711  172785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:53:36.116762  172785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:53:36.116827  172785 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:53:36.116886  172785 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:53:36.116997  172785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:53:36.117109  172785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:53:36.117153  172785 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:53:36.117218  172785 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:53:36.118466  172785 out.go:235]   - Booting up control plane ...
	I0210 11:53:36.118539  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:53:36.118608  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:53:36.118679  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:53:36.118787  172785 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:53:36.118909  172785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:53:36.118953  172785 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:53:36.119006  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119163  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119240  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119382  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119444  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119585  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119661  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119821  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119883  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.120101  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.120114  172785 kubeadm.go:310] 
	I0210 11:53:36.120147  172785 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:53:36.120183  172785 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:53:36.120193  172785 kubeadm.go:310] 
	I0210 11:53:36.120226  172785 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:53:36.120255  172785 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:53:36.120349  172785 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:53:36.120362  172785 kubeadm.go:310] 
	I0210 11:53:36.120468  172785 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:53:36.120512  172785 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:53:36.120543  172785 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:53:36.120549  172785 kubeadm.go:310] 
	I0210 11:53:36.120653  172785 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:53:36.120728  172785 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:53:36.120736  172785 kubeadm.go:310] 
	I0210 11:53:36.120858  172785 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:53:36.120980  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:53:36.121098  172785 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:53:36.121214  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:53:36.121256  172785 kubeadm.go:310] 
	W0210 11:53:36.121387  172785 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
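The repeated kubelet-check failures above come from probing the kubelet's health endpoint at http://localhost:10248/healthz, which keeps returning "connection refused" because the kubelet never starts. A minimal sketch of that probe (the URL is taken from the log; the five attempts and the sleep interval are arbitrary choices, not kubeadm's schedule):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// Matches the failure mode in the log: connection refused while the kubelet is down.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Printf("attempt %d: kubelet /healthz returned %s\n", attempt, resp.Status)
		return
	}
	fmt.Println("kubelet never answered; 'journalctl -xeu kubelet' is the next step the log suggests")
}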
	
	I0210 11:53:36.121446  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 11:53:41.570804  172785 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.449332067s)
	I0210 11:53:41.570881  172785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:53:41.583752  172785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:53:41.592553  172785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:53:41.592576  172785 kubeadm.go:157] found existing configuration files:
	
	I0210 11:53:41.592626  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:53:41.600941  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:53:41.601000  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:53:41.609340  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:53:41.617464  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:53:41.617522  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:53:41.625988  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:53:41.633984  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:53:41.634044  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:53:41.642503  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:53:41.650425  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:53:41.650482  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
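The config check above keeps each kubeconfig only if it already references https://control-plane.minikube.internal:8443 and removes it otherwise so kubeadm can rewrite it on the retry. A minimal sketch of that cleanup (same file paths and endpoint as the log; it needs root and really deletes files):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// A missing file or one without the expected endpoint is removed,
			// matching the "may not be in ... - will remove" lines above.
			fmt.Printf("removing %s\n", f)
			_ = os.Remove(f)
		}
	}
}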
	I0210 11:53:41.658856  172785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:53:41.860461  172785 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:55:38.137554  172785 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:55:38.137647  172785 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 11:55:38.138863  172785 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:55:38.138932  172785 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:55:38.139057  172785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:55:38.139227  172785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:55:38.139319  172785 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:55:38.139374  172785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:55:38.141121  172785 out.go:235]   - Generating certificates and keys ...
	I0210 11:55:38.141232  172785 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:55:38.141287  172785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:55:38.141401  172785 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 11:55:38.141504  172785 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 11:55:38.141588  172785 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 11:55:38.141677  172785 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 11:55:38.141766  172785 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 11:55:38.141863  172785 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 11:55:38.141941  172785 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 11:55:38.142049  172785 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 11:55:38.142107  172785 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 11:55:38.142188  172785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:55:38.142262  172785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:55:38.142343  172785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:55:38.142446  172785 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:55:38.142524  172785 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:55:38.142623  172785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:55:38.142733  172785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:55:38.142772  172785 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:55:38.142847  172785 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:55:38.144218  172785 out.go:235]   - Booting up control plane ...
	I0210 11:55:38.144323  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:55:38.144400  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:55:38.144457  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:55:38.144527  172785 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:55:38.144671  172785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:55:38.144733  172785 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:55:38.144843  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145077  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145155  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145321  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145403  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145599  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145696  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145874  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145956  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.146118  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.146130  172785 kubeadm.go:310] 
	I0210 11:55:38.146170  172785 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:55:38.146213  172785 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:55:38.146227  172785 kubeadm.go:310] 
	I0210 11:55:38.146286  172785 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:55:38.146329  172785 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:55:38.146481  172785 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:55:38.146492  172785 kubeadm.go:310] 
	I0210 11:55:38.146597  172785 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:55:38.146633  172785 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:55:38.146662  172785 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:55:38.146668  172785 kubeadm.go:310] 
	I0210 11:55:38.146752  172785 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:55:38.146820  172785 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:55:38.146830  172785 kubeadm.go:310] 
	I0210 11:55:38.146936  172785 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:55:38.147020  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:55:38.147098  172785 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:55:38.147210  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:55:38.147271  172785 kubeadm.go:310] 
	I0210 11:55:38.147280  172785 kubeadm.go:394] duration metric: took 8m2.242182664s to StartCluster
	I0210 11:55:38.147337  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:55:38.147399  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:55:38.190552  172785 cri.go:89] found id: ""
	I0210 11:55:38.190585  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.190593  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:55:38.190601  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:55:38.190653  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:55:38.223994  172785 cri.go:89] found id: ""
	I0210 11:55:38.224030  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.224041  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:55:38.224050  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:55:38.224114  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:55:38.254975  172785 cri.go:89] found id: ""
	I0210 11:55:38.255002  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.255013  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:55:38.255021  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:55:38.255087  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:55:38.294383  172785 cri.go:89] found id: ""
	I0210 11:55:38.294412  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.294423  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:55:38.294431  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:55:38.294481  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:55:38.330915  172785 cri.go:89] found id: ""
	I0210 11:55:38.330943  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.330952  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:55:38.330958  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:55:38.331013  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:55:38.368811  172785 cri.go:89] found id: ""
	I0210 11:55:38.368841  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.368849  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:55:38.368856  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:55:38.368912  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:55:38.405782  172785 cri.go:89] found id: ""
	I0210 11:55:38.405809  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.405817  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:55:38.405822  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:55:38.405878  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:55:38.443286  172785 cri.go:89] found id: ""
	I0210 11:55:38.443313  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.443320  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
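The sweep above runs 'crictl ps -a --quiet --name=<component>' for each control-plane component and finds nothing, which is why every component reports "0 containers": the kubelet never launched the static pods. A minimal sketch of the same sweep (assumes crictl is installed and sudo is available):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same invocation as the log: quiet mode prints only container IDs, one per line.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%-24s crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%-24s %d container(s) %v\n", name, len(ids), ids)
	}
}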
	I0210 11:55:38.443331  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:55:38.443344  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:55:38.457513  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:55:38.457552  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:55:38.535390  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:55:38.535413  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:55:38.535425  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:55:38.644609  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:55:38.644644  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:55:38.708870  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:55:38.708900  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
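The log gathering above shells out to journalctl for the crio and kubelet units with '-n 400'. A minimal sketch of the same collection (adds --no-pager for non-interactive use; reading system journals may require root or membership in systemd-journal):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"crio", "kubelet"} {
		// Pull the last 400 lines of each unit's journal, as the log does.
		out, err := exec.Command("journalctl", "-u", unit, "-n", "400", "--no-pager").CombinedOutput()
		if err != nil {
			fmt.Printf("== %s: journalctl failed: %v ==\n", unit, err)
			continue
		}
		fmt.Printf("== last 400 lines for %s ==\n%s\n", unit, out)
	}
}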
	W0210 11:55:38.771312  172785 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 11:55:38.771377  172785 out.go:270] * 
	W0210 11:55:38.771437  172785 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:55:38.771456  172785 out.go:270] * 
	W0210 11:55:38.772241  172785 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 11:55:38.775175  172785 out.go:201] 
	W0210 11:55:38.776401  172785 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:55:38.776449  172785 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 11:55:38.776467  172785 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 11:55:38.777818  172785 out.go:201] 
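The exit message above suggests retrying with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of that retry (the flag and the related issue link come from the log; the profile name old-k8s-version-510006 is taken from the CRI-O section below and is assumed to be the failing profile):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Re-run minikube start with the kubelet cgroup-driver override the log recommends.
	cmd := exec.Command("minikube", "start",
		"-p", "old-k8s-version-510006",
		"--extra-config=kubelet.cgroup-driver=systemd")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("retry with the suggested kubelet cgroup-driver failed: %v", err)
	}
}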
	
	
	==> CRI-O <==
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.225958966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189081225933094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42f209c6-2dc8-4ff3-909d-b5740670b61d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.226695003Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32ac1c16-52c9-41ef-a845-27f351f79087 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.226759143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32ac1c16-52c9-41ef-a845-27f351f79087 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.226846187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=32ac1c16-52c9-41ef-a845-27f351f79087 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.255669439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54499312-1d48-4d3f-95c8-6108f70aede2 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.255760211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54499312-1d48-4d3f-95c8-6108f70aede2 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.259187146Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41e3d563-643e-45b8-bb01-a02dd3034d0e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.259666679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189081259640204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41e3d563-643e-45b8-bb01-a02dd3034d0e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.260204383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25bb80a1-8bc0-42f7-aa66-aa416a5f738f name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.260291552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25bb80a1-8bc0-42f7-aa66-aa416a5f738f name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.260326145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=25bb80a1-8bc0-42f7-aa66-aa416a5f738f name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.288448049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2ad3f63-929b-426b-a32c-115166fe9737 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.288530761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2ad3f63-929b-426b-a32c-115166fe9737 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.289332565Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72c4be81-2d54-473f-a339-8e38e87d8988 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.289707797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189081289682508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72c4be81-2d54-473f-a339-8e38e87d8988 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.290103330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eaa25c3c-1fdf-4847-bc49-6829ca7afafd name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.290164446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eaa25c3c-1fdf-4847-bc49-6829ca7afafd name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.290199879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=eaa25c3c-1fdf-4847-bc49-6829ca7afafd name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.318121221Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=868e9823-a476-4455-8b12-1a5253441d13 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.318208401Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=868e9823-a476-4455-8b12-1a5253441d13 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.319453406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de0818ae-5ecd-4792-aeaa-05a8dca9eea3 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.319879236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189081319859309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de0818ae-5ecd-4792-aeaa-05a8dca9eea3 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.320348391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a19c813-f21c-46a2-9d36-9fbdb322d6ab name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.320410227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a19c813-f21c-46a2-9d36-9fbdb322d6ab name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:04:41 old-k8s-version-510006 crio[632]: time="2025-02-10 12:04:41.320441887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1a19c813-f21c-46a2-9d36-9fbdb322d6ab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb10 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054289] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039411] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.995296] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.082058] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.584320] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.340922] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.062802] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054806] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.152386] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.133625] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.265093] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.059229] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.067098] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.246980] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[ +12.002986] kauditd_printk_skb: 46 callbacks suppressed
	[Feb10 11:51] systemd-fstab-generator[5014]: Ignoring "noauto" option for root device
	[Feb10 11:53] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[  +0.060734] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:04:41 up 17 min,  0 users,  load average: 0.09, 0.05, 0.04
	Linux old-k8s-version-510006 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000204960, 0xc0009b44e0, 0x23, 0xc0003b4cc0)
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]: created by internal/singleflight.(*Group).DoChan
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]: goroutine 166 [syscall]:
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]: net._C2func_getaddrinfo(0xc000982480, 0x0, 0xc00076d0b0, 0xc0000102d0, 0x0, 0x0, 0x0)
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]:         _cgo_gotypes.go:94 +0x55
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]: net.cgoLookupIPCNAME.func1(0xc000982480, 0x20, 0x20, 0xc00076d0b0, 0xc0000102d0, 0x4e4a5a0, 0xc0006b26a0, 0x57a492)
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc0009b44b0, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]: net.cgoIPLookup(0xc000182e40, 0x48ab5d6, 0x3, 0xc0009b44b0, 0x1f)
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]: created by net.cgoLookupIP
	Feb 10 12:04:38 old-k8s-version-510006 kubelet[6471]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Feb 10 12:04:38 old-k8s-version-510006 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 10 12:04:38 old-k8s-version-510006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 10 12:04:39 old-k8s-version-510006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Feb 10 12:04:39 old-k8s-version-510006 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 10 12:04:39 old-k8s-version-510006 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 10 12:04:39 old-k8s-version-510006 kubelet[6480]: I0210 12:04:39.253676    6480 server.go:416] Version: v1.20.0
	Feb 10 12:04:39 old-k8s-version-510006 kubelet[6480]: I0210 12:04:39.254015    6480 server.go:837] Client rotation is on, will bootstrap in background
	Feb 10 12:04:39 old-k8s-version-510006 kubelet[6480]: I0210 12:04:39.258440    6480 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 10 12:04:39 old-k8s-version-510006 kubelet[6480]: W0210 12:04:39.259508    6480 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 10 12:04:39 old-k8s-version-510006 kubelet[6480]: I0210 12:04:39.259578    6480 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510006 -n old-k8s-version-510006
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 2 (227.74444ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-510006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.51s)
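The status check above reports the profile's apiserver as "Stopped", and the kubelet journal shows the service in a restart loop (restart counter at 114) with DNS resolution blocked in cgo getaddrinfo. A minimal manual re-check of that state, assuming the same profile name and minikube binary path as the commands above (the journalctl step is an assumption about the node image, not something the test itself runs), would be:

	# report component state for the profile named in the logs above
	out/minikube-linux-amd64 status -p old-k8s-version-510006 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# if APIServer shows Stopped, inspect the kubelet restart loop on the node
	out/minikube-linux-amd64 ssh -p old-k8s-version-510006 "sudo journalctl -u kubelet --no-pager -n 50"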

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (353.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:05:04.561898  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:05:53.023142  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:06:02.063979  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:06:15.138449  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous warning repeated 68 more times]
E0210 12:07:44.325167  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/no-preload-484935/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous warning repeated 10 more times]
E0210 12:07:54.942882  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous warning repeated 26 more times]
E0210 12:08:22.013923  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/default-k8s-diff-port-448087/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous warning repeated 9 more times]
E0210 12:08:32.415703  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous warning repeated 33 more times]
E0210 12:09:06.276969  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:09:07.391503  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/no-preload-484935/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
[previous warning repeated 8 more times]
E0210 12:09:16.621610  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:09:35.344148  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:09:45.081057  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/default-k8s-diff-port-448087/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
E0210 12:10:04.561950  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.244:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.244:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510006 -n old-k8s-version-510006
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 2 (238.538707ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-510006" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-510006 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-510006 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.03µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-510006 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 2 (220.34574ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-510006 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-413450 image list                          | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| delete  | -p embed-certs-413450                                  | embed-certs-413450           | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| start   | -p newest-cni-188461 --memory=2200 --alsologtostderr   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | no-preload-484935 image list                           | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| delete  | -p no-preload-484935                                   | no-preload-484935            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	| addons  | enable metrics-server -p newest-cni-188461             | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-448087                           | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-448087 | jenkins | v1.35.0 | 10 Feb 25 11:50 UTC | 10 Feb 25 11:50 UTC |
	|         | default-k8s-diff-port-448087                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-188461                  | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-188461 --memory=2200 --alsologtostderr   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-188461 image list                           | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	| delete  | -p newest-cni-188461                                   | newest-cni-188461            | jenkins | v1.35.0 | 10 Feb 25 11:51 UTC | 10 Feb 25 11:51 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 11:51:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 11:51:05.820340  175432 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:51:05.820502  175432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:51:05.820516  175432 out.go:358] Setting ErrFile to fd 2...
	I0210 11:51:05.820523  175432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:51:05.820766  175432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:51:05.821523  175432 out.go:352] Setting JSON to false
	I0210 11:51:05.822831  175432 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9208,"bootTime":1739179058,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 11:51:05.822988  175432 start.go:139] virtualization: kvm guest
	I0210 11:51:05.825163  175432 out.go:177] * [newest-cni-188461] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 11:51:05.826457  175432 notify.go:220] Checking for updates...
	I0210 11:51:05.826494  175432 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:51:05.827767  175432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:51:05.828893  175432 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:51:05.830154  175432 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:51:05.831155  175432 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 11:51:05.832181  175432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:51:05.833664  175432 config.go:182] Loaded profile config "newest-cni-188461": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:51:05.834109  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:05.834167  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:05.849261  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I0210 11:51:05.849766  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:05.850430  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:05.850466  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:05.850929  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:05.851149  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:05.851442  175432 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:51:05.851738  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:05.851794  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:05.867715  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37135
	I0210 11:51:05.868207  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:05.868793  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:05.868820  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:05.869239  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:05.869480  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:05.906409  175432 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 11:51:05.907615  175432 start.go:297] selected driver: kvm2
	I0210 11:51:05.907629  175432 start.go:901] validating driver "kvm2" against &{Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:51:05.907767  175432 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:51:05.908475  175432 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:51:05.908568  175432 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-109271/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 11:51:05.924427  175432 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 11:51:05.924814  175432 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 11:51:05.924842  175432 cni.go:84] Creating CNI manager for ""
	I0210 11:51:05.924873  175432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:51:05.924904  175432 start.go:340] cluster config:
	{Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:51:05.925004  175432 iso.go:125] acquiring lock: {Name:mk479d49a84808a4b16be867aad83d1d3d802291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:51:05.926563  175432 out.go:177] * Starting "newest-cni-188461" primary control-plane node in "newest-cni-188461" cluster
	I0210 11:51:05.927651  175432 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 11:51:05.927697  175432 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 11:51:05.927710  175432 cache.go:56] Caching tarball of preloaded images
	I0210 11:51:05.927792  175432 preload.go:172] Found /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 11:51:05.927808  175432 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 11:51:05.927910  175432 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/config.json ...
	I0210 11:51:05.928134  175432 start.go:360] acquireMachinesLock for newest-cni-188461: {Name:mke6c3a615c5915495f0682c0833d8830c2c1004 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:51:05.928183  175432 start.go:364] duration metric: took 27.306µs to acquireMachinesLock for "newest-cni-188461"
	I0210 11:51:05.928204  175432 start.go:96] Skipping create...Using existing machine configuration
	I0210 11:51:05.928212  175432 fix.go:54] fixHost starting: 
	I0210 11:51:05.928550  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:05.928590  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:05.944316  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41967
	I0210 11:51:05.944759  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:05.945287  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:05.945316  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:05.945647  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:05.945896  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:05.946092  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:05.947956  175432 fix.go:112] recreateIfNeeded on newest-cni-188461: state=Stopped err=<nil>
	I0210 11:51:05.948006  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	W0210 11:51:05.948163  175432 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 11:51:05.950073  175432 out.go:177] * Restarting existing kvm2 VM for "newest-cni-188461" ...
	I0210 11:51:02.699759  172785 cri.go:89] found id: ""
	I0210 11:51:02.699826  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.699843  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:02.699853  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:02.699915  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:02.735317  172785 cri.go:89] found id: ""
	I0210 11:51:02.735346  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.735354  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:02.735360  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:02.735410  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:02.765670  172785 cri.go:89] found id: ""
	I0210 11:51:02.765697  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.765704  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:02.765710  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:02.765759  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:02.797404  172785 cri.go:89] found id: ""
	I0210 11:51:02.797435  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.797448  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:02.797456  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:02.797515  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:02.829414  172785 cri.go:89] found id: ""
	I0210 11:51:02.829448  172785 logs.go:282] 0 containers: []
	W0210 11:51:02.829459  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:02.829471  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:02.829487  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:02.880066  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:02.880105  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:02.893239  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:02.893274  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:02.971736  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:02.971766  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:02.971782  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:03.046928  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:03.046967  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:05.590932  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:05.604033  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:05.604091  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:05.640343  172785 cri.go:89] found id: ""
	I0210 11:51:05.640374  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.640383  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:05.640391  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:05.640441  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:05.676294  172785 cri.go:89] found id: ""
	I0210 11:51:05.676319  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.676326  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:05.676331  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:05.676371  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:05.708986  172785 cri.go:89] found id: ""
	I0210 11:51:05.709016  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.709026  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:05.709034  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:05.709087  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:05.741689  172785 cri.go:89] found id: ""
	I0210 11:51:05.741714  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.741722  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:05.741728  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:05.741769  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:05.774470  172785 cri.go:89] found id: ""
	I0210 11:51:05.774496  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.774506  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:05.774514  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:05.774571  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:05.806632  172785 cri.go:89] found id: ""
	I0210 11:51:05.806659  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.806669  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:05.806676  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:05.806725  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:05.849963  172785 cri.go:89] found id: ""
	I0210 11:51:05.849987  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.850001  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:05.850012  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:05.850068  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:05.888840  172785 cri.go:89] found id: ""
	I0210 11:51:05.888870  172785 logs.go:282] 0 containers: []
	W0210 11:51:05.888880  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:05.888893  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:05.888907  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:05.930082  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:05.930105  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:05.985122  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:05.985156  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:06.000022  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:06.000051  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:06.080268  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:06.080290  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:06.080305  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:05.951396  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Start
	I0210 11:51:05.951587  175432 main.go:141] libmachine: (newest-cni-188461) starting domain...
	I0210 11:51:05.951605  175432 main.go:141] libmachine: (newest-cni-188461) ensuring networks are active...
	I0210 11:51:05.952431  175432 main.go:141] libmachine: (newest-cni-188461) Ensuring network default is active
	I0210 11:51:05.952804  175432 main.go:141] libmachine: (newest-cni-188461) Ensuring network mk-newest-cni-188461 is active
	I0210 11:51:05.953275  175432 main.go:141] libmachine: (newest-cni-188461) getting domain XML...
	I0210 11:51:05.954033  175432 main.go:141] libmachine: (newest-cni-188461) creating domain...
	I0210 11:51:07.158707  175432 main.go:141] libmachine: (newest-cni-188461) waiting for IP...
	I0210 11:51:07.159498  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:07.159846  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:07.159937  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:07.159839  175468 retry.go:31] will retry after 306.733597ms: waiting for domain to come up
	I0210 11:51:07.468485  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:07.468938  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:07.468960  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:07.468906  175468 retry.go:31] will retry after 340.921152ms: waiting for domain to come up
	I0210 11:51:07.811449  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:07.811899  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:07.811930  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:07.811856  175468 retry.go:31] will retry after 454.621787ms: waiting for domain to come up
	I0210 11:51:08.268622  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:08.269162  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:08.269193  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:08.269129  175468 retry.go:31] will retry after 544.066974ms: waiting for domain to come up
	I0210 11:51:08.815072  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:08.815779  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:08.815813  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:08.815728  175468 retry.go:31] will retry after 715.223482ms: waiting for domain to come up
	I0210 11:51:09.532634  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:09.533080  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:09.533105  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:09.533047  175468 retry.go:31] will retry after 919.550163ms: waiting for domain to come up
	I0210 11:51:10.453662  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:10.454148  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:10.454184  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:10.454112  175468 retry.go:31] will retry after 1.132151714s: waiting for domain to come up
	I0210 11:51:08.668417  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:08.681333  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:08.681391  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:08.716394  172785 cri.go:89] found id: ""
	I0210 11:51:08.716427  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.716435  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:08.716442  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:08.716492  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:08.752135  172785 cri.go:89] found id: ""
	I0210 11:51:08.752161  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.752170  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:08.752175  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:08.752222  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:08.785404  172785 cri.go:89] found id: ""
	I0210 11:51:08.785430  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.785438  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:08.785443  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:08.785506  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:08.816938  172785 cri.go:89] found id: ""
	I0210 11:51:08.816965  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.816977  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:08.816986  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:08.817078  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:08.850791  172785 cri.go:89] found id: ""
	I0210 11:51:08.850827  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.850838  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:08.850847  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:08.850905  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:08.887566  172785 cri.go:89] found id: ""
	I0210 11:51:08.887602  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.887615  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:08.887623  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:08.887686  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:08.921347  172785 cri.go:89] found id: ""
	I0210 11:51:08.921389  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.921397  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:08.921404  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:08.921462  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:08.954704  172785 cri.go:89] found id: ""
	I0210 11:51:08.954738  172785 logs.go:282] 0 containers: []
	W0210 11:51:08.954750  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:08.954762  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:08.954777  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:09.004897  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:09.004932  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:09.020413  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:09.020440  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:09.093835  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:09.093861  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:09.093874  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:09.174312  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:09.174355  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:11.710924  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:11.722908  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:11.722976  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:11.756702  172785 cri.go:89] found id: ""
	I0210 11:51:11.756744  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.756757  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:11.756765  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:11.756839  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:11.787281  172785 cri.go:89] found id: ""
	I0210 11:51:11.787315  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.787326  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:11.787334  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:11.787407  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:11.817416  172785 cri.go:89] found id: ""
	I0210 11:51:11.817443  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.817451  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:11.817456  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:11.817508  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:11.847209  172785 cri.go:89] found id: ""
	I0210 11:51:11.847241  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.847253  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:11.847260  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:11.847326  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:11.883365  172785 cri.go:89] found id: ""
	I0210 11:51:11.883395  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.883403  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:11.883408  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:11.883457  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:11.919812  172785 cri.go:89] found id: ""
	I0210 11:51:11.919840  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.919847  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:11.919854  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:11.919901  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:11.961310  172785 cri.go:89] found id: ""
	I0210 11:51:11.961348  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.961359  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:11.961366  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:11.961443  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:11.999667  172785 cri.go:89] found id: ""
	I0210 11:51:11.999701  172785 logs.go:282] 0 containers: []
	W0210 11:51:11.999709  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:11.999718  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:11.999730  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:12.049284  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:12.049320  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:12.062044  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:12.062073  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:12.126307  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:12.126334  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:12.126351  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:12.215334  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:12.215382  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:11.587837  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:11.588448  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:11.588474  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:11.588419  175468 retry.go:31] will retry after 1.04294927s: waiting for domain to come up
	I0210 11:51:12.632697  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:12.633143  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:12.633181  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:12.633127  175468 retry.go:31] will retry after 1.81651321s: waiting for domain to come up
	I0210 11:51:14.452121  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:14.452630  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:14.452696  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:14.452603  175468 retry.go:31] will retry after 2.010851888s: waiting for domain to come up
	I0210 11:51:14.752711  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:14.765091  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:14.765158  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:14.796318  172785 cri.go:89] found id: ""
	I0210 11:51:14.796352  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.796362  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:14.796371  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:14.796438  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:14.826452  172785 cri.go:89] found id: ""
	I0210 11:51:14.826484  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.826493  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:14.826501  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:14.826566  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:14.859861  172785 cri.go:89] found id: ""
	I0210 11:51:14.859890  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.859898  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:14.859904  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:14.859965  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:14.893708  172785 cri.go:89] found id: ""
	I0210 11:51:14.893740  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.893748  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:14.893755  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:14.893820  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:14.925870  172785 cri.go:89] found id: ""
	I0210 11:51:14.925897  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.925905  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:14.925911  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:14.925977  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:14.960528  172785 cri.go:89] found id: ""
	I0210 11:51:14.960554  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.960562  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:14.960567  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:14.960630  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:14.992831  172785 cri.go:89] found id: ""
	I0210 11:51:14.992859  172785 logs.go:282] 0 containers: []
	W0210 11:51:14.992867  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:14.992874  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:14.992934  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:15.026146  172785 cri.go:89] found id: ""
	I0210 11:51:15.026182  172785 logs.go:282] 0 containers: []
	W0210 11:51:15.026193  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:15.026203  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:15.026217  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:15.074502  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:15.074537  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:15.087671  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:15.087713  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:15.152959  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:15.152984  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:15.153000  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:15.225042  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:15.225082  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:16.465454  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:16.465905  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:16.465953  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:16.465902  175468 retry.go:31] will retry after 2.06317351s: waiting for domain to come up
	I0210 11:51:18.530291  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:18.530745  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:18.530777  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:18.530719  175468 retry.go:31] will retry after 3.12374249s: waiting for domain to come up
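
The growing delays in the "waiting for domain to come up" lines (306ms, 340ms, 454ms, ... up to several seconds) come from a retry helper that backs off between probes. A minimal sketch of that pattern is shown below; it assumes a simple growing delay and is not minikube's actual retry.go.

// Sketch: poll a condition with an increasing delay between attempts,
// mirroring the "will retry after ...: waiting for domain to come up" loop.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryUntil(timeout time.Duration, probe func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, err := probe()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between probes
	}
	return errors.New("timed out waiting for domain IP")
}

func main() {
	start := time.Now()
	_ = retryUntil(10*time.Second, func() (bool, error) {
		// Placeholder probe: a real caller would query libvirt/DHCP for the lease.
		return time.Since(start) > 3*time.Second, nil
	})
}
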
	I0210 11:51:17.763634  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:17.776970  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:17.777038  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:17.810704  172785 cri.go:89] found id: ""
	I0210 11:51:17.810736  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.810747  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:17.810755  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:17.810814  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:17.845216  172785 cri.go:89] found id: ""
	I0210 11:51:17.845242  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.845251  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:17.845257  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:17.845316  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:17.877621  172785 cri.go:89] found id: ""
	I0210 11:51:17.877652  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.877668  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:17.877675  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:17.877737  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:17.908704  172785 cri.go:89] found id: ""
	I0210 11:51:17.908730  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.908739  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:17.908744  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:17.908792  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:17.943857  172785 cri.go:89] found id: ""
	I0210 11:51:17.943887  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.943896  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:17.943902  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:17.943952  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:17.974965  172785 cri.go:89] found id: ""
	I0210 11:51:17.974998  172785 logs.go:282] 0 containers: []
	W0210 11:51:17.975010  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:17.975018  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:17.975085  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:18.006248  172785 cri.go:89] found id: ""
	I0210 11:51:18.006282  172785 logs.go:282] 0 containers: []
	W0210 11:51:18.006292  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:18.006300  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:18.006360  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:18.036899  172785 cri.go:89] found id: ""
	I0210 11:51:18.036943  172785 logs.go:282] 0 containers: []
	W0210 11:51:18.036954  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:18.036967  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:18.036982  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:18.049026  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:18.049054  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:18.111425  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:18.111452  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:18.111464  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:18.185158  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:18.185198  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:18.220425  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:18.220458  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:20.771952  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:20.784242  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:20.784303  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:20.815676  172785 cri.go:89] found id: ""
	I0210 11:51:20.815702  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.815709  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:20.815715  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:20.815773  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:20.845540  172785 cri.go:89] found id: ""
	I0210 11:51:20.845573  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.845583  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:20.845592  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:20.845654  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:20.875046  172785 cri.go:89] found id: ""
	I0210 11:51:20.875077  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.875086  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:20.875092  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:20.875150  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:20.905636  172785 cri.go:89] found id: ""
	I0210 11:51:20.905662  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.905670  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:20.905675  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:20.905722  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:20.935907  172785 cri.go:89] found id: ""
	I0210 11:51:20.935938  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.935948  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:20.935955  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:20.936028  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:20.965345  172785 cri.go:89] found id: ""
	I0210 11:51:20.965375  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.965386  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:20.965395  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:20.965464  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:20.995608  172785 cri.go:89] found id: ""
	I0210 11:51:20.995637  172785 logs.go:282] 0 containers: []
	W0210 11:51:20.995646  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:20.995651  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:20.995712  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:21.025886  172785 cri.go:89] found id: ""
	I0210 11:51:21.025914  172785 logs.go:282] 0 containers: []
	W0210 11:51:21.025923  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:21.025932  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:21.025946  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:21.074578  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:21.074617  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:21.087795  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:21.087825  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:21.151479  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:21.151505  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:21.151520  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:21.228563  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:21.228613  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:21.655587  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:21.656261  175432 main.go:141] libmachine: (newest-cni-188461) DBG | unable to find current IP address of domain newest-cni-188461 in network mk-newest-cni-188461
	I0210 11:51:21.656284  175432 main.go:141] libmachine: (newest-cni-188461) DBG | I0210 11:51:21.655989  175468 retry.go:31] will retry after 4.241425857s: waiting for domain to come up
	I0210 11:51:23.769730  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:23.781806  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:23.781877  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:23.812884  172785 cri.go:89] found id: ""
	I0210 11:51:23.812912  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.812920  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:23.812926  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:23.812975  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:23.844665  172785 cri.go:89] found id: ""
	I0210 11:51:23.844700  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.844708  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:23.844713  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:23.844764  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:23.879613  172785 cri.go:89] found id: ""
	I0210 11:51:23.879642  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.879651  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:23.879657  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:23.879711  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:23.911425  172785 cri.go:89] found id: ""
	I0210 11:51:23.911452  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.911459  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:23.911465  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:23.911515  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:23.944567  172785 cri.go:89] found id: ""
	I0210 11:51:23.944601  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.944610  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:23.944617  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:23.944669  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:23.974980  172785 cri.go:89] found id: ""
	I0210 11:51:23.975008  172785 logs.go:282] 0 containers: []
	W0210 11:51:23.975016  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:23.975022  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:23.975074  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:24.006450  172785 cri.go:89] found id: ""
	I0210 11:51:24.006484  172785 logs.go:282] 0 containers: []
	W0210 11:51:24.006492  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:24.006499  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:24.006563  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:24.037483  172785 cri.go:89] found id: ""
	I0210 11:51:24.037521  172785 logs.go:282] 0 containers: []
	W0210 11:51:24.037533  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:24.037545  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:24.037560  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:24.049887  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:24.049921  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:24.117589  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:24.117615  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:24.117628  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:24.193737  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:24.193775  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:24.230256  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:24.230287  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:26.780045  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:26.792355  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:26.792446  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:26.826505  172785 cri.go:89] found id: ""
	I0210 11:51:26.826536  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.826544  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:26.826550  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:26.826601  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:26.865128  172785 cri.go:89] found id: ""
	I0210 11:51:26.865172  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.865185  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:26.865193  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:26.865259  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:26.897605  172785 cri.go:89] found id: ""
	I0210 11:51:26.897636  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.897644  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:26.897650  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:26.897699  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:26.930033  172785 cri.go:89] found id: ""
	I0210 11:51:26.930067  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.930079  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:26.930089  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:26.930151  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:26.963458  172785 cri.go:89] found id: ""
	I0210 11:51:26.963497  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.963509  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:26.963519  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:26.963586  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:26.993022  172785 cri.go:89] found id: ""
	I0210 11:51:26.993051  172785 logs.go:282] 0 containers: []
	W0210 11:51:26.993058  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:26.993065  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:26.993114  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:27.029713  172785 cri.go:89] found id: ""
	I0210 11:51:27.029756  172785 logs.go:282] 0 containers: []
	W0210 11:51:27.029768  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:27.029776  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:27.029838  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:27.065917  172785 cri.go:89] found id: ""
	I0210 11:51:27.065952  172785 logs.go:282] 0 containers: []
	W0210 11:51:27.065962  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:27.065976  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:27.065988  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:27.127397  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:27.127435  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:27.140024  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:27.140055  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:27.218604  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:27.218625  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:27.218639  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:27.293606  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:27.293645  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:25.902358  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.902836  175432 main.go:141] libmachine: (newest-cni-188461) found domain IP: 192.168.39.24
	I0210 11:51:25.902861  175432 main.go:141] libmachine: (newest-cni-188461) reserving static IP address...
	I0210 11:51:25.902877  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has current primary IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.903373  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "newest-cni-188461", mac: "52:54:00:25:fb:1e", ip: "192.168.39.24"} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:25.903414  175432 main.go:141] libmachine: (newest-cni-188461) DBG | skip adding static IP to network mk-newest-cni-188461 - found existing host DHCP lease matching {name: "newest-cni-188461", mac: "52:54:00:25:fb:1e", ip: "192.168.39.24"}
	I0210 11:51:25.903432  175432 main.go:141] libmachine: (newest-cni-188461) reserved static IP address 192.168.39.24 for domain newest-cni-188461
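
The lease shown above is read back from libvirt's DHCP state for the mk-newest-cni-188461 network. The same lookup can be done by hand with virsh; the snippet below is a hypothetical check (Go shelling out to virsh, not minikube's code) that simply scans the lease table for the MAC address reported in the log.

// Hypothetical check: list libvirt DHCP leases for the minikube network and
// print any entry for MAC 52:54:00:25:fb:1e.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"net-dhcp-leases", "mk-newest-cni-188461").CombinedOutput()
	if err != nil {
		fmt.Println("virsh failed:", err)
		return
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "52:54:00:25:fb:1e") {
			fmt.Println("lease:", strings.TrimSpace(line))
		}
	}
}
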
	I0210 11:51:25.903450  175432 main.go:141] libmachine: (newest-cni-188461) waiting for SSH...
	I0210 11:51:25.903464  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Getting to WaitForSSH function...
	I0210 11:51:25.905574  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.905915  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:25.905949  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:25.906037  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Using SSH client type: external
	I0210 11:51:25.906082  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Using SSH private key: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa (-rw-------)
	I0210 11:51:25.906117  175432 main.go:141] libmachine: (newest-cni-188461) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.24 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 11:51:25.906133  175432 main.go:141] libmachine: (newest-cni-188461) DBG | About to run SSH command:
	I0210 11:51:25.906142  175432 main.go:141] libmachine: (newest-cni-188461) DBG | exit 0
	I0210 11:51:26.026989  175432 main.go:141] libmachine: (newest-cni-188461) DBG | SSH cmd err, output: <nil>: 
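
The "Using SSH client type: external" lines show the exact option set libmachine hands to /usr/bin/ssh while waiting for the guest; the probe itself is just "exit 0". As a rough illustration (the option list, key path, and address are copied from the log; the wrapper itself is not minikube's code), the same reachability check could be scripted like this:

// Sketch: run the external-SSH probe from the log against the guest.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no", "-o", "ControlPath=none",
		"-o", "LogLevel=quiet", "-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60", "-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes", "-i", key,
		"-p", "22", "docker@192.168.39.24",
		"exit", "0", // the command libmachine runs to confirm SSH is up
	}
	if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
		fmt.Println("guest not reachable over SSH yet:", err)
		return
	}
	fmt.Println("SSH is up")
}
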
	I0210 11:51:26.027395  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetConfigRaw
	I0210 11:51:26.028030  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:26.030814  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.031285  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.031323  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.031552  175432 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/config.json ...
	I0210 11:51:26.031826  175432 machine.go:93] provisionDockerMachine start ...
	I0210 11:51:26.031852  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:26.032077  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.034420  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.034744  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.034774  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.034906  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.035078  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.035233  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.035365  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.035514  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.035757  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.035775  175432 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:51:26.135247  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 11:51:26.135280  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetMachineName
	I0210 11:51:26.135565  175432 buildroot.go:166] provisioning hostname "newest-cni-188461"
	I0210 11:51:26.135601  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetMachineName
	I0210 11:51:26.135800  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.138386  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.138722  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.138760  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.138918  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.139103  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.139257  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.139396  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.139525  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.139740  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.139760  175432 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-188461 && echo "newest-cni-188461" | sudo tee /etc/hostname
	I0210 11:51:26.252653  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-188461
	
	I0210 11:51:26.252681  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.255333  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.255649  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.255683  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.255832  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.256043  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.256209  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.256316  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.256451  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.256607  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.256621  175432 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-188461' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-188461/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-188461' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:51:26.367365  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:51:26.367412  175432 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20385-109271/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-109271/.minikube}
	I0210 11:51:26.367489  175432 buildroot.go:174] setting up certificates
	I0210 11:51:26.367512  175432 provision.go:84] configureAuth start
	I0210 11:51:26.367534  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetMachineName
	I0210 11:51:26.367839  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:26.370685  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.371061  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.371093  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.371229  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.373420  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.373836  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.373880  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.373983  175432 provision.go:143] copyHostCerts
	I0210 11:51:26.374051  175432 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem, removing ...
	I0210 11:51:26.374065  175432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem
	I0210 11:51:26.374133  175432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/ca.pem (1078 bytes)
	I0210 11:51:26.374276  175432 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem, removing ...
	I0210 11:51:26.374287  175432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem
	I0210 11:51:26.374313  175432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/cert.pem (1123 bytes)
	I0210 11:51:26.374367  175432 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem, removing ...
	I0210 11:51:26.374375  175432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem
	I0210 11:51:26.374397  175432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-109271/.minikube/key.pem (1679 bytes)
	I0210 11:51:26.374449  175432 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem org=jenkins.newest-cni-188461 san=[127.0.0.1 192.168.39.24 localhost minikube newest-cni-188461]
	I0210 11:51:26.560219  175432 provision.go:177] copyRemoteCerts
	I0210 11:51:26.560295  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:51:26.560322  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.562789  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.563081  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.563110  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.563305  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.563539  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.563695  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.563849  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:26.644785  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:51:26.666689  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0210 11:51:26.688226  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 11:51:26.709285  175432 provision.go:87] duration metric: took 341.756699ms to configureAuth
	I0210 11:51:26.709309  175432 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:51:26.709474  175432 config.go:182] Loaded profile config "newest-cni-188461": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:51:26.709553  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.712093  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.712454  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.712485  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.712651  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.712862  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.713012  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.713160  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.713286  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:26.713469  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:26.713490  175432 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 11:51:26.936519  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 11:51:26.936549  175432 machine.go:96] duration metric: took 904.704645ms to provisionDockerMachine
	I0210 11:51:26.936563  175432 start.go:293] postStartSetup for "newest-cni-188461" (driver="kvm2")
	I0210 11:51:26.936577  175432 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:51:26.936604  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:26.936940  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:51:26.936977  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:26.939826  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.940192  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:26.940237  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:26.940341  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:26.940583  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:26.940763  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:26.940960  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:27.026462  175432 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:51:27.031688  175432 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:51:27.031709  175432 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/addons for local assets ...
	I0210 11:51:27.031773  175432 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-109271/.minikube/files for local assets ...
	I0210 11:51:27.031842  175432 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem -> 1164702.pem in /etc/ssl/certs
	I0210 11:51:27.031934  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:51:27.044721  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:51:27.074068  175432 start.go:296] duration metric: took 137.488029ms for postStartSetup
	I0210 11:51:27.074125  175432 fix.go:56] duration metric: took 21.145913922s for fixHost
	I0210 11:51:27.074147  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:27.077156  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.077642  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.077674  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.077899  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:27.078079  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.078248  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.078349  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:27.078477  175432 main.go:141] libmachine: Using SSH client type: native
	I0210 11:51:27.078645  175432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0210 11:51:27.078655  175432 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:51:27.189002  175432 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739188287.148629499
	
	I0210 11:51:27.189035  175432 fix.go:216] guest clock: 1739188287.148629499
	I0210 11:51:27.189046  175432 fix.go:229] Guest: 2025-02-10 11:51:27.148629499 +0000 UTC Remote: 2025-02-10 11:51:27.074130149 +0000 UTC m=+21.295255642 (delta=74.49935ms)
	I0210 11:51:27.189075  175432 fix.go:200] guest clock delta is within tolerance: 74.49935ms
	I0210 11:51:27.189098  175432 start.go:83] releasing machines lock for "newest-cni-188461", held for 21.260901149s
	I0210 11:51:27.189149  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.189435  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:27.192197  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.192662  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.192691  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.192835  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.193427  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.193607  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:27.193731  175432 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:51:27.193784  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:27.193815  175432 ssh_runner.go:195] Run: cat /version.json
	I0210 11:51:27.193843  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:27.196421  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.196581  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.196952  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.196982  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.197011  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:27.197027  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:27.197119  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:27.197229  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:27.197348  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.197432  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:27.197512  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:27.197578  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:27.197673  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:27.197762  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:27.309501  175432 ssh_runner.go:195] Run: systemctl --version
	I0210 11:51:27.315451  175432 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 11:51:27.461369  175432 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 11:51:27.467018  175432 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:51:27.467094  175432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:51:27.482133  175432 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:51:27.482163  175432 start.go:495] detecting cgroup driver to use...
	I0210 11:51:27.482234  175432 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:51:27.497192  175432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:51:27.510105  175432 docker.go:217] disabling cri-docker service (if available) ...
	I0210 11:51:27.510161  175432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 11:51:27.523916  175432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 11:51:27.537043  175432 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 11:51:27.652244  175432 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 11:51:27.798511  175432 docker.go:233] disabling docker service ...
	I0210 11:51:27.798592  175432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 11:51:27.812301  175432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 11:51:27.824217  175432 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 11:51:27.953601  175432 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 11:51:28.082863  175432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 11:51:28.095446  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:51:28.111945  175432 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 11:51:28.112013  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.121412  175432 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 11:51:28.121479  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.130512  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.139646  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.148613  175432 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:51:28.157806  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.166775  175432 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.181698  175432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 11:51:28.190623  175432 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:51:28.198803  175432 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:51:28.198866  175432 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:51:28.210820  175432 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:51:28.219005  175432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:51:28.334861  175432 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 11:51:28.416349  175432 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 11:51:28.416439  175432 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 11:51:28.421694  175432 start.go:563] Will wait 60s for crictl version
	I0210 11:51:28.421766  175432 ssh_runner.go:195] Run: which crictl
	I0210 11:51:28.425209  175432 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:51:28.469947  175432 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 11:51:28.470045  175432 ssh_runner.go:195] Run: crio --version
	I0210 11:51:28.501926  175432 ssh_runner.go:195] Run: crio --version
	I0210 11:51:28.529983  175432 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 11:51:28.531238  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetIP
	I0210 11:51:28.534202  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:28.534482  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:28.534503  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:28.534753  175432 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 11:51:28.538726  175432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:51:28.552133  175432 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0210 11:51:28.553249  175432 kubeadm.go:883] updating cluster {Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:51:28.553380  175432 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 11:51:28.553432  175432 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:51:28.586300  175432 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 11:51:28.586363  175432 ssh_runner.go:195] Run: which lz4
	I0210 11:51:28.589827  175432 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 11:51:28.593533  175432 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 11:51:28.593560  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 11:51:29.799950  175432 crio.go:462] duration metric: took 1.21014347s to copy over tarball
	I0210 11:51:29.800045  175432 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 11:51:29.829516  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:29.841844  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:29.841926  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:29.877623  172785 cri.go:89] found id: ""
	I0210 11:51:29.877659  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.877671  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:29.877681  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:29.877755  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:29.917643  172785 cri.go:89] found id: ""
	I0210 11:51:29.917675  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.917687  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:29.917695  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:29.917761  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:29.963649  172785 cri.go:89] found id: ""
	I0210 11:51:29.963674  172785 logs.go:282] 0 containers: []
	W0210 11:51:29.963682  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:29.963687  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:29.963737  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:30.002084  172785 cri.go:89] found id: ""
	I0210 11:51:30.002113  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.002123  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:30.002131  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:30.002195  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:30.033435  172785 cri.go:89] found id: ""
	I0210 11:51:30.033462  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.033470  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:30.033476  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:30.033527  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:30.066494  172785 cri.go:89] found id: ""
	I0210 11:51:30.066531  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.066544  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:30.066553  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:30.066631  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:30.106190  172785 cri.go:89] found id: ""
	I0210 11:51:30.106224  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.106235  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:30.106242  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:30.106307  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:30.138747  172785 cri.go:89] found id: ""
	I0210 11:51:30.138783  172785 logs.go:282] 0 containers: []
	W0210 11:51:30.138794  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:30.138806  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:30.138821  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:30.186179  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:30.186214  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:30.239040  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:30.239098  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:30.251790  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:30.251833  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:30.331476  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:30.331510  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:30.331526  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:31.868684  175432 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.068598843s)
	I0210 11:51:31.868722  175432 crio.go:469] duration metric: took 2.068733654s to extract the tarball
	I0210 11:51:31.868734  175432 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 11:51:31.905043  175432 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:51:31.949467  175432 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 11:51:31.949495  175432 cache_images.go:84] Images are preloaded, skipping loading
	I0210 11:51:31.949506  175432 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.32.1 crio true true} ...
	I0210 11:51:31.949635  175432 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-188461 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:51:31.949725  175432 ssh_runner.go:195] Run: crio config
	I0210 11:51:31.995118  175432 cni.go:84] Creating CNI manager for ""
	I0210 11:51:31.995138  175432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:51:31.995148  175432 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0210 11:51:31.995171  175432 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-188461 NodeName:newest-cni-188461 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 11:51:31.995327  175432 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-188461"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 11:51:31.995401  175432 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 11:51:32.004538  175432 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:51:32.004595  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:51:32.013199  175432 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0210 11:51:32.028077  175432 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:51:32.042573  175432 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0210 11:51:32.058002  175432 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I0210 11:51:32.061432  175432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:51:32.072627  175432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:51:32.186846  175432 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:51:32.202515  175432 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461 for IP: 192.168.39.24
	I0210 11:51:32.202534  175432 certs.go:194] generating shared ca certs ...
	I0210 11:51:32.202551  175432 certs.go:226] acquiring lock for ca certs: {Name:mk41def3593b0ff6effd099cf80de2e0c576c931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:32.202707  175432 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key
	I0210 11:51:32.202751  175432 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key
	I0210 11:51:32.202760  175432 certs.go:256] generating profile certs ...
	I0210 11:51:32.202851  175432 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/client.key
	I0210 11:51:32.202927  175432 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/apiserver.key.972ab71d
	I0210 11:51:32.202971  175432 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/proxy-client.key
	I0210 11:51:32.203107  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem (1338 bytes)
	W0210 11:51:32.203160  175432 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470_empty.pem, impossibly tiny 0 bytes
	I0210 11:51:32.203176  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 11:51:32.203230  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/ca.pem (1078 bytes)
	I0210 11:51:32.203260  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/cert.pem (1123 bytes)
	I0210 11:51:32.203292  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/certs/key.pem (1679 bytes)
	I0210 11:51:32.203349  175432 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem (1708 bytes)
	I0210 11:51:32.203967  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:51:32.237448  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0210 11:51:32.265671  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:51:32.300282  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 11:51:32.321803  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 11:51:32.356159  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:51:32.384387  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:51:32.405311  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/newest-cni-188461/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:51:32.426731  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/certs/116470.pem --> /usr/share/ca-certificates/116470.pem (1338 bytes)
	I0210 11:51:32.447878  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/ssl/certs/1164702.pem --> /usr/share/ca-certificates/1164702.pem (1708 bytes)
	I0210 11:51:32.468769  175432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:51:32.489529  175432 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 11:51:32.504167  175432 ssh_runner.go:195] Run: openssl version
	I0210 11:51:32.509508  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116470.pem && ln -fs /usr/share/ca-certificates/116470.pem /etc/ssl/certs/116470.pem"
	I0210 11:51:32.518871  175432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116470.pem
	I0210 11:51:32.522876  175432 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:41 /usr/share/ca-certificates/116470.pem
	I0210 11:51:32.522932  175432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116470.pem
	I0210 11:51:32.528142  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/116470.pem /etc/ssl/certs/51391683.0"
	I0210 11:51:32.537270  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1164702.pem && ln -fs /usr/share/ca-certificates/1164702.pem /etc/ssl/certs/1164702.pem"
	I0210 11:51:32.546522  175432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1164702.pem
	I0210 11:51:32.550499  175432 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:41 /usr/share/ca-certificates/1164702.pem
	I0210 11:51:32.550547  175432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1164702.pem
	I0210 11:51:32.555659  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1164702.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:51:32.564881  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:51:32.574099  175432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:51:32.578092  175432 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:51:32.578136  175432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:51:32.583164  175432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:51:32.592213  175432 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:51:32.596194  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 11:51:32.601754  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 11:51:32.607136  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 11:51:32.612639  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 11:51:32.617866  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 11:51:32.623168  175432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 11:51:32.628580  175432 kubeadm.go:392] StartCluster: {Name:newest-cni-188461 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-188461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:51:32.628663  175432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 11:51:32.628718  175432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:51:32.662324  175432 cri.go:89] found id: ""
	I0210 11:51:32.662406  175432 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 11:51:32.671458  175432 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 11:51:32.671474  175432 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 11:51:32.671515  175432 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 11:51:32.680246  175432 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 11:51:32.680805  175432 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-188461" does not appear in /home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:51:32.681030  175432 kubeconfig.go:62] /home/jenkins/minikube-integration/20385-109271/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-188461" cluster setting kubeconfig missing "newest-cni-188461" context setting]
	I0210 11:51:32.681433  175432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/kubeconfig: {Name:mk38b84c4ae8f3ad09ecb56633115faef0fe39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:32.682590  175432 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 11:51:32.690876  175432 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.24
	I0210 11:51:32.690920  175432 kubeadm.go:1160] stopping kube-system containers ...
	I0210 11:51:32.690932  175432 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 11:51:32.690971  175432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:51:32.722678  175432 cri.go:89] found id: ""
	I0210 11:51:32.722734  175432 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 11:51:32.737166  175432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:51:32.745716  175432 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:51:32.745735  175432 kubeadm.go:157] found existing configuration files:
	
	I0210 11:51:32.745774  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:51:32.753706  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:51:32.753748  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:51:32.761921  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:51:32.769684  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:51:32.769733  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:51:32.778027  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:51:32.785678  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:51:32.785720  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:51:32.793869  175432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:51:32.801704  175432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:51:32.801745  175432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
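Editorial aside on the cleanup sequence above: for each of the four kubeconfig-style files, minikube greps for the expected control-plane endpoint and, when the file is missing or does not contain it, removes the file so kubeadm can regenerate it. A minimal sketch of that per-file check, assuming the commands run locally rather than over SSH (the file list and endpoint come from the log; the helper itself is illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleConfigs mirrors the per-file check in the log: grep each config
// for the control-plane endpoint and remove files that are missing it (or
// missing entirely), so kubeadm can regenerate them cleanly.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		// `grep <endpoint> <file>` exits non-zero when the file is absent
		// or the endpoint is not present in it.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}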
	I0210 11:51:32.809777  175432 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:51:32.817865  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:32.922655  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:33.799309  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:34.003678  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:34.061490  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:34.141205  175432 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:51:34.141278  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:34.641870  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:35.142005  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:35.641428  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
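Aside: the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above are a simple poll, retried roughly every 500ms, until the apiserver process appears after the kubeadm control-plane phase. A rough sketch of such a poll, running pgrep locally instead of over SSH (illustrative only; the flags and pattern are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess repeatedly runs pgrep until a process matching pattern
// appears or the timeout elapses (-x exact match, -n newest, -f match the
// full command line, as in the logged command).
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil && len(out) > 0 {
			return strings.TrimSpace(string(out)), nil // PID of the newest match
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q within %s", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}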
	I0210 11:51:32.918871  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:32.932814  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:32.932871  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:32.968103  172785 cri.go:89] found id: ""
	I0210 11:51:32.968136  172785 logs.go:282] 0 containers: []
	W0210 11:51:32.968148  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:32.968155  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:32.968218  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:33.004341  172785 cri.go:89] found id: ""
	I0210 11:51:33.004373  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.004388  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:33.004395  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:33.004448  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:33.042028  172785 cri.go:89] found id: ""
	I0210 11:51:33.042063  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.042075  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:33.042083  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:33.042146  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:33.078050  172785 cri.go:89] found id: ""
	I0210 11:51:33.078075  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.078083  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:33.078089  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:33.078138  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:33.114525  172785 cri.go:89] found id: ""
	I0210 11:51:33.114557  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.114566  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:33.114572  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:33.114642  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:33.149333  172785 cri.go:89] found id: ""
	I0210 11:51:33.149360  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.149368  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:33.149374  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:33.149442  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:33.180356  172785 cri.go:89] found id: ""
	I0210 11:51:33.180391  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.180399  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:33.180414  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:33.180466  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:33.216587  172785 cri.go:89] found id: ""
	I0210 11:51:33.216623  172785 logs.go:282] 0 containers: []
	W0210 11:51:33.216634  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:33.216647  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:33.216663  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:51:33.249169  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:33.249202  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:33.298276  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:33.298313  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:33.310872  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:33.310898  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:33.383025  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:33.383053  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:33.383070  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:35.956363  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:35.968886  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:51:35.968960  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:51:36.000870  172785 cri.go:89] found id: ""
	I0210 11:51:36.000902  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.000911  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:51:36.000919  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:51:36.000969  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:51:36.034456  172785 cri.go:89] found id: ""
	I0210 11:51:36.034489  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.034501  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:51:36.034509  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:51:36.034573  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:51:36.076207  172785 cri.go:89] found id: ""
	I0210 11:51:36.076238  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.076250  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:51:36.076258  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:51:36.076323  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:51:36.123438  172785 cri.go:89] found id: ""
	I0210 11:51:36.123474  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.123485  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:51:36.123494  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:51:36.123561  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:51:36.157858  172785 cri.go:89] found id: ""
	I0210 11:51:36.157897  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.157909  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:51:36.157918  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:51:36.157986  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:51:36.195990  172785 cri.go:89] found id: ""
	I0210 11:51:36.196024  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.196035  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:51:36.196044  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:51:36.196110  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:51:36.229709  172785 cri.go:89] found id: ""
	I0210 11:51:36.229742  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.229754  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:51:36.229762  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:51:36.229828  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:51:36.263497  172785 cri.go:89] found id: ""
	I0210 11:51:36.263530  172785 logs.go:282] 0 containers: []
	W0210 11:51:36.263544  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:51:36.263557  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:51:36.263575  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 11:51:36.323038  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:51:36.323075  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:51:36.339537  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:51:36.339565  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:51:36.415073  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:51:36.415103  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:51:36.415118  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:51:36.496333  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:51:36.496388  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
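Aside: the block above fans out one `crictl ps -a --quiet --name=<component>` call per control-plane component and reports "No container was found matching" whenever the filter returns nothing. A compact sketch of that loop, run locally for illustration (minikube executes these commands over SSH inside the VM; this is not its actual code path):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listControlPlaneContainers checks each expected component for containers,
// mirroring the per-name crictl queries in the log above.
func listControlPlaneContainers() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}

func main() { listControlPlaneContainers() }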
	I0210 11:51:36.142283  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:36.642276  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:36.656745  175432 api_server.go:72] duration metric: took 2.515536249s to wait for apiserver process to appear ...
	I0210 11:51:36.656777  175432 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:51:36.656802  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:39.394390  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 11:51:39.394421  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 11:51:39.394436  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:39.437828  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 11:51:39.437873  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 11:51:39.657293  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:39.664873  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 11:51:39.664898  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 11:51:40.157233  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:40.162450  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 11:51:40.162480  175432 api_server.go:103] status: https://192.168.39.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 11:51:40.657079  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:40.662355  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0210 11:51:40.672632  175432 api_server.go:141] control plane version: v1.32.1
	I0210 11:51:40.672663  175432 api_server.go:131] duration metric: took 4.015877097s to wait for apiserver health ...
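Aside on the healthz handshake just logged: an anonymous probe first gets 403 (RBAC rejects system:anonymous), then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, and finally 200 once bootstrap completes. A minimal sketch of such a readiness poll, assuming an anonymous client that skips TLS verification (the endpoint and status sequence come from the log; the code is illustrative, not minikube's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the deadline expires; 403 and 500 responses are treated as "not ready
// yet", matching the sequence recorded above.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative only: an anonymous probe presents no client
		// certificate, so TLS verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.39.24:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}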
	I0210 11:51:40.672674  175432 cni.go:84] Creating CNI manager for ""
	I0210 11:51:40.672682  175432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 11:51:40.674230  175432 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 11:51:40.675515  175432 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 11:51:40.714574  175432 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
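Aside: the two lines above create /etc/cni/net.d and copy a bridge CNI conflist into it. The exact 496-byte file is not shown in the log; the sketch below writes a generic bridge + portmap conflist of that kind, and the plugin names, subnet, and option values are assumptions for illustration only:

package main

import (
	"fmt"
	"os"
)

// exampleBridgeConflist is a generic bridge CNI configuration of the kind
// written to /etc/cni/net.d/1-k8s.conflist above. Values here are
// illustrative assumptions, not the exact file from the log.
const exampleBridgeConflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println("mkdir:", err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(exampleBridgeConflist), 0o644); err != nil {
		fmt.Println("write:", err)
	}
}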
	I0210 11:51:40.761839  175432 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:51:40.766154  175432 system_pods.go:59] 8 kube-system pods found
	I0210 11:51:40.766198  175432 system_pods.go:61] "coredns-668d6bf9bc-s8bdj" [b89cbee2-a27d-4c8e-950c-b9bb794dca2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 11:51:40.766211  175432 system_pods.go:61] "etcd-newest-cni-188461" [d3f5135e-dc27-4326-8b51-9273547f4ead] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 11:51:40.766222  175432 system_pods.go:61] "kube-apiserver-newest-cni-188461" [b2b151b6-34c2-45f9-b052-4978e1d4c4e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 11:51:40.766233  175432 system_pods.go:61] "kube-controller-manager-newest-cni-188461" [7c5ff0ac-2dd6-4de0-8533-de9235d7ecee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 11:51:40.766246  175432 system_pods.go:61] "kube-proxy-hnd7c" [211dd9a1-4677-4b30-a805-8c44aa78929a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0210 11:51:40.766259  175432 system_pods.go:61] "kube-scheduler-newest-cni-188461" [65a9946b-d333-4dca-8047-6243b2233902] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 11:51:40.766269  175432 system_pods.go:61] "metrics-server-f79f97bbb-bfqgl" [994d3cd1-03a9-4bc6-9d1f-726efac9bf56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 11:51:40.766285  175432 system_pods.go:61] "storage-provisioner" [ae729534-6a0a-45a8-82ab-cfcb49ba06a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 11:51:40.766295  175432 system_pods.go:74] duration metric: took 4.431457ms to wait for pod list to return data ...
	I0210 11:51:40.766308  175432 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:51:40.769411  175432 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:51:40.769438  175432 node_conditions.go:123] node cpu capacity is 2
	I0210 11:51:40.769451  175432 node_conditions.go:105] duration metric: took 3.132289ms to run NodePressure ...
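Aside: the NodePressure verification above reads the node's capacity (ephemeral storage, CPU) and checks its pressure conditions. A rough client-go sketch of the same idea, assuming the in-VM kubeconfig path used elsewhere in the log (illustrative, not minikube's code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		// Report any pressure condition that is currently True.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True\n", c.Type)
				}
			}
		}
	}
}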
	I0210 11:51:40.769473  175432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 11:51:41.086960  175432 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 11:51:41.098932  175432 ops.go:34] apiserver oom_adj: -16
	I0210 11:51:41.098960  175432 kubeadm.go:597] duration metric: took 8.427477491s to restartPrimaryControlPlane
	I0210 11:51:41.098972  175432 kubeadm.go:394] duration metric: took 8.470418783s to StartCluster
	I0210 11:51:41.098996  175432 settings.go:142] acquiring lock: {Name:mk1369a4cca9eaf53282144d4cb555c048db8e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:41.099098  175432 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:51:41.100320  175432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-109271/kubeconfig: {Name:mk38b84c4ae8f3ad09ecb56633115faef0fe39c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:51:41.100593  175432 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 11:51:41.100701  175432 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 11:51:41.100794  175432 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-188461"
	I0210 11:51:41.100803  175432 config.go:182] Loaded profile config "newest-cni-188461": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:51:41.100819  175432 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-188461"
	W0210 11:51:41.100827  175432 addons.go:247] addon storage-provisioner should already be in state true
	I0210 11:51:41.100817  175432 addons.go:69] Setting default-storageclass=true in profile "newest-cni-188461"
	I0210 11:51:41.100822  175432 addons.go:69] Setting metrics-server=true in profile "newest-cni-188461"
	I0210 11:51:41.100850  175432 addons.go:69] Setting dashboard=true in profile "newest-cni-188461"
	I0210 11:51:41.100852  175432 addons.go:238] Setting addon metrics-server=true in "newest-cni-188461"
	I0210 11:51:41.100860  175432 addons.go:238] Setting addon dashboard=true in "newest-cni-188461"
	I0210 11:51:41.100862  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.100863  175432 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-188461"
	W0210 11:51:41.100868  175432 addons.go:247] addon dashboard should already be in state true
	W0210 11:51:41.100872  175432 addons.go:247] addon metrics-server should already be in state true
	I0210 11:51:41.100896  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.100896  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.101280  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101284  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101284  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101297  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.101304  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.101306  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.101317  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.101331  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.102551  175432 out.go:177] * Verifying Kubernetes components...
	I0210 11:51:41.104005  175432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:51:41.126954  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33921
	I0210 11:51:41.126969  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34267
	I0210 11:51:41.126987  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43197
	I0210 11:51:41.126957  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0210 11:51:41.127478  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.127629  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.127758  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.128041  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.128116  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128132  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.128297  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128317  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.128356  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128380  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.128772  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.128775  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.128814  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.128869  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.128889  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.129376  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.129425  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.129664  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.129977  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.130022  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.130061  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.130084  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.130105  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.133045  175432 addons.go:238] Setting addon default-storageclass=true in "newest-cni-188461"
	W0210 11:51:41.133067  175432 addons.go:247] addon default-storageclass should already be in state true
	I0210 11:51:41.133099  175432 host.go:66] Checking if "newest-cni-188461" exists ...
	I0210 11:51:41.133468  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.133505  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.151283  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0210 11:51:41.151844  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.152503  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.152516  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.152878  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.153060  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.154241  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
	I0210 11:51:41.155099  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.155177  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.155659  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.155682  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.156073  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.156257  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.157422  175432 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:51:41.157807  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.158807  175432 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:51:41.158829  175432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 11:51:41.158847  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.159480  175432 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0210 11:51:41.160731  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 11:51:41.160754  175432 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 11:51:41.160771  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.164823  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.165475  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.165588  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.165840  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.166026  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.166161  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.166279  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:41.166561  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.166895  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.166944  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.167071  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.167255  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37463
	I0210 11:51:41.167365  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.167586  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.167759  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:41.167785  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.168584  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.168608  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.168951  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.169176  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.170787  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
	I0210 11:51:41.170957  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.171371  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.171901  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.171922  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.172307  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.172722  175432 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0210 11:51:41.172993  175432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:51:41.173038  175432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:51:41.174922  175432 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0210 11:51:39.040991  172785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:39.053214  172785 kubeadm.go:597] duration metric: took 4m3.101491896s to restartPrimaryControlPlane
	W0210 11:51:39.053293  172785 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 11:51:39.053321  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 11:51:39.522357  172785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:51:39.540499  172785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:51:39.553326  172785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:51:39.562786  172785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:51:39.562803  172785 kubeadm.go:157] found existing configuration files:
	
	I0210 11:51:39.562852  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:51:39.573017  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:51:39.573078  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:51:39.581851  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:51:39.590590  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:51:39.590645  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:51:39.599653  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:51:39.608323  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:51:39.608385  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:51:39.617777  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:51:39.626714  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:51:39.626776  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:51:39.636522  172785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:51:39.840090  172785 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:51:41.176022  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0210 11:51:41.176045  175432 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0210 11:51:41.176065  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.179317  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.179726  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.179749  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.179976  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.180142  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.180281  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.180389  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:41.191261  175432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0210 11:51:41.191669  175432 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:51:41.192145  175432 main.go:141] libmachine: Using API Version  1
	I0210 11:51:41.192168  175432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:51:41.192536  175432 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:51:41.192736  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetState
	I0210 11:51:41.194288  175432 main.go:141] libmachine: (newest-cni-188461) Calling .DriverName
	I0210 11:51:41.194490  175432 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 11:51:41.194509  175432 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 11:51:41.194523  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHHostname
	I0210 11:51:41.197218  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.197921  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHPort
	I0210 11:51:41.197930  175432 main.go:141] libmachine: (newest-cni-188461) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:fb:1e", ip: ""} in network mk-newest-cni-188461: {Iface:virbr1 ExpiryTime:2025-02-10 12:51:16 +0000 UTC Type:0 Mac:52:54:00:25:fb:1e Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:newest-cni-188461 Clientid:01:52:54:00:25:fb:1e}
	I0210 11:51:41.197948  175432 main.go:141] libmachine: (newest-cni-188461) DBG | domain newest-cni-188461 has defined IP address 192.168.39.24 and MAC address 52:54:00:25:fb:1e in network mk-newest-cni-188461
	I0210 11:51:41.198076  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHKeyPath
	I0210 11:51:41.198218  175432 main.go:141] libmachine: (newest-cni-188461) Calling .GetSSHUsername
	I0210 11:51:41.198446  175432 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/newest-cni-188461/id_rsa Username:docker}
	I0210 11:51:41.369336  175432 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:51:41.409927  175432 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:51:41.410008  175432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:51:41.469358  175432 api_server.go:72] duration metric: took 368.71941ms to wait for apiserver process to appear ...
	I0210 11:51:41.469394  175432 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:51:41.469421  175432 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0210 11:51:41.478932  175432 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0210 11:51:41.479821  175432 api_server.go:141] control plane version: v1.32.1
	I0210 11:51:41.479842  175432 api_server.go:131] duration metric: took 10.440148ms to wait for apiserver health ...
	I0210 11:51:41.479849  175432 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:51:41.483318  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:51:41.492142  175432 system_pods.go:59] 8 kube-system pods found
	I0210 11:51:41.492175  175432 system_pods.go:61] "coredns-668d6bf9bc-s8bdj" [b89cbee2-a27d-4c8e-950c-b9bb794dca2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 11:51:41.492186  175432 system_pods.go:61] "etcd-newest-cni-188461" [d3f5135e-dc27-4326-8b51-9273547f4ead] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 11:51:41.492198  175432 system_pods.go:61] "kube-apiserver-newest-cni-188461" [b2b151b6-34c2-45f9-b052-4978e1d4c4e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 11:51:41.492205  175432 system_pods.go:61] "kube-controller-manager-newest-cni-188461" [7c5ff0ac-2dd6-4de0-8533-de9235d7ecee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 11:51:41.492211  175432 system_pods.go:61] "kube-proxy-hnd7c" [211dd9a1-4677-4b30-a805-8c44aa78929a] Running
	I0210 11:51:41.492217  175432 system_pods.go:61] "kube-scheduler-newest-cni-188461" [65a9946b-d333-4dca-8047-6243b2233902] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 11:51:41.492225  175432 system_pods.go:61] "metrics-server-f79f97bbb-bfqgl" [994d3cd1-03a9-4bc6-9d1f-726efac9bf56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 11:51:41.492231  175432 system_pods.go:61] "storage-provisioner" [ae729534-6a0a-45a8-82ab-cfcb49ba06a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 11:51:41.492241  175432 system_pods.go:74] duration metric: took 12.386239ms to wait for pod list to return data ...
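Aside: the "waiting for kube-system pods to appear" step above amounts to listing pods in the kube-system namespace and inspecting each pod's phase and Ready condition, which is why the log annotates pods as Running or Pending with ContainersNotReady details. A rough client-go sketch of that check, again assuming the in-VM kubeconfig path (illustrative only):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("  %s: phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}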
	I0210 11:51:41.492250  175432 default_sa.go:34] waiting for default service account to be created ...
	I0210 11:51:41.519350  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 11:51:41.519703  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 11:51:41.519723  175432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0210 11:51:41.545596  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0210 11:51:41.545625  175432 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0210 11:51:41.558654  175432 default_sa.go:45] found service account: "default"
	I0210 11:51:41.558684  175432 default_sa.go:55] duration metric: took 66.426419ms for default service account to be created ...
	I0210 11:51:41.558700  175432 kubeadm.go:582] duration metric: took 458.068792ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0210 11:51:41.558721  175432 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:51:41.572430  175432 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:51:41.572460  175432 node_conditions.go:123] node cpu capacity is 2
	I0210 11:51:41.572474  175432 node_conditions.go:105] duration metric: took 13.747435ms to run NodePressure ...
	I0210 11:51:41.572491  175432 start.go:241] waiting for startup goroutines ...
	I0210 11:51:41.605452  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 11:51:41.605489  175432 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 11:51:41.688747  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0210 11:51:41.688776  175432 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0210 11:51:41.726543  175432 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:51:41.726571  175432 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 11:51:41.757822  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0210 11:51:41.757858  175432 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0210 11:51:41.771198  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:51:41.825047  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0210 11:51:41.825080  175432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0210 11:51:41.882686  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0210 11:51:41.882711  175432 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0210 11:51:41.921482  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0210 11:51:41.921509  175432 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0210 11:51:41.939640  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0210 11:51:41.939672  175432 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0210 11:51:41.962617  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0210 11:51:41.962646  175432 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0210 11:51:42.038983  175432 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 11:51:42.039022  175432 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0210 11:51:42.124093  175432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 11:51:43.223401  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.70401283s)
	I0210 11:51:43.223470  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223483  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.223510  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.740158145s)
	I0210 11:51:43.223551  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223567  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.223789  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.223808  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.223818  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223825  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.223882  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.223884  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.223899  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.223930  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.223939  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.224164  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.224178  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.224236  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.224256  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.232594  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.232615  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.232981  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.233003  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.232998  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.308633  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.537378605s)
	I0210 11:51:43.308700  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.308717  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.309027  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.309053  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.309066  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.309075  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.309083  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.309347  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.309363  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.309374  175432 addons.go:479] Verifying addon metrics-server=true in "newest-cni-188461"
	I0210 11:51:43.556313  175432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.432154735s)
	I0210 11:51:43.556376  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.556405  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.556687  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.556729  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.556745  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.556755  175432 main.go:141] libmachine: Making call to close driver server
	I0210 11:51:43.556768  175432 main.go:141] libmachine: (newest-cni-188461) Calling .Close
	I0210 11:51:43.557141  175432 main.go:141] libmachine: Successfully made call to close driver server
	I0210 11:51:43.557157  175432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 11:51:43.557176  175432 main.go:141] libmachine: (newest-cni-188461) DBG | Closing plugin on server side
	I0210 11:51:43.558678  175432 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-188461 addons enable metrics-server
	
	I0210 11:51:43.559994  175432 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0210 11:51:43.561282  175432 addons.go:514] duration metric: took 2.460575953s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0210 11:51:43.561329  175432 start.go:246] waiting for cluster config update ...
	I0210 11:51:43.561346  175432 start.go:255] writing updated cluster config ...
	I0210 11:51:43.561735  175432 ssh_runner.go:195] Run: rm -f paused
	I0210 11:51:43.609808  175432 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 11:51:43.611600  175432 out.go:177] * Done! kubectl is now configured to use "newest-cni-188461" cluster and "default" namespace by default
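To double-check that the addons reported above actually came up, a minimal sketch (the kube-system and kubernetes-dashboard namespaces are assumed from the default minikube addon layout, not taken from this log):

	# list addon status for the profile
	minikube -p newest-cni-188461 addons list
	# metrics-server is assumed to land in kube-system
	kubectl --context newest-cni-188461 -n kube-system get deploy metrics-server
	# the dashboard addon is assumed to deploy into its own namespace
	kubectl --context newest-cni-188461 -n kubernetes-dashboard get pods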
	I0210 11:53:36.111959  172785 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:53:36.112102  172785 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 11:53:36.113706  172785 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:53:36.113753  172785 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:53:36.113855  172785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:53:36.114008  172785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:53:36.114159  172785 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:53:36.114222  172785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:53:36.115928  172785 out.go:235]   - Generating certificates and keys ...
	I0210 11:53:36.116009  172785 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:53:36.116086  172785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:53:36.116175  172785 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 11:53:36.116231  172785 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 11:53:36.116289  172785 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 11:53:36.116335  172785 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 11:53:36.116393  172785 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 11:53:36.116446  172785 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 11:53:36.116518  172785 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 11:53:36.116583  172785 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 11:53:36.116616  172785 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 11:53:36.116668  172785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:53:36.116711  172785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:53:36.116762  172785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:53:36.116827  172785 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:53:36.116886  172785 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:53:36.116997  172785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:53:36.117109  172785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:53:36.117153  172785 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:53:36.117218  172785 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:53:36.118466  172785 out.go:235]   - Booting up control plane ...
	I0210 11:53:36.118539  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:53:36.118608  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:53:36.118679  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:53:36.118787  172785 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:53:36.118909  172785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:53:36.118953  172785 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:53:36.119006  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119163  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119240  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119382  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119444  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119585  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119661  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.119821  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.119883  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:53:36.120101  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:53:36.120114  172785 kubeadm.go:310] 
	I0210 11:53:36.120147  172785 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:53:36.120183  172785 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:53:36.120193  172785 kubeadm.go:310] 
	I0210 11:53:36.120226  172785 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:53:36.120255  172785 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:53:36.120349  172785 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:53:36.120362  172785 kubeadm.go:310] 
	I0210 11:53:36.120468  172785 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:53:36.120512  172785 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:53:36.120543  172785 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:53:36.120549  172785 kubeadm.go:310] 
	I0210 11:53:36.120653  172785 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:53:36.120728  172785 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:53:36.120736  172785 kubeadm.go:310] 
	I0210 11:53:36.120858  172785 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:53:36.120980  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:53:36.121098  172785 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:53:36.121214  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:53:36.121256  172785 kubeadm.go:310] 
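The systemctl/journalctl/crictl checks suggested above can be run directly on the node; a hedged sketch follows (the profile name is inferred from the old-k8s-version-510006 hostname in the CRI-O section further down, and CONTAINERID is a placeholder, not a value from this log):

	# open a shell on the node for this profile
	minikube -p old-k8s-version-510006 ssh
	# inside the node shell:
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the failing container once identified
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID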
	W0210 11:53:36.121387  172785 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 11:53:36.121446  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 11:53:41.570804  172785 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.449332067s)
	I0210 11:53:41.570881  172785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:53:41.583752  172785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:53:41.592553  172785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:53:41.592576  172785 kubeadm.go:157] found existing configuration files:
	
	I0210 11:53:41.592626  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:53:41.600941  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:53:41.601000  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:53:41.609340  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:53:41.617464  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:53:41.617522  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:53:41.625988  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:53:41.633984  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:53:41.634044  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:53:41.642503  172785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:53:41.650425  172785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:53:41.650482  172785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
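The four grep-then-rm steps above amount to the following loop, shown only as a condensed sketch of the stale-config check, not as minikube's actual implementation:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it still points at the expected control-plane endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done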
	I0210 11:53:41.658856  172785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 11:53:41.860461  172785 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:55:38.137554  172785 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 11:55:38.137647  172785 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 11:55:38.138863  172785 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 11:55:38.138932  172785 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:55:38.139057  172785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:55:38.139227  172785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:55:38.139319  172785 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 11:55:38.139374  172785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:55:38.141121  172785 out.go:235]   - Generating certificates and keys ...
	I0210 11:55:38.141232  172785 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:55:38.141287  172785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:55:38.141401  172785 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 11:55:38.141504  172785 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 11:55:38.141588  172785 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 11:55:38.141677  172785 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 11:55:38.141766  172785 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 11:55:38.141863  172785 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 11:55:38.141941  172785 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 11:55:38.142049  172785 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 11:55:38.142107  172785 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 11:55:38.142188  172785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:55:38.142262  172785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:55:38.142343  172785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:55:38.142446  172785 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:55:38.142524  172785 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:55:38.142623  172785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:55:38.142733  172785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:55:38.142772  172785 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:55:38.142847  172785 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:55:38.144218  172785 out.go:235]   - Booting up control plane ...
	I0210 11:55:38.144323  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:55:38.144400  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:55:38.144457  172785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:55:38.144527  172785 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:55:38.144671  172785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 11:55:38.144733  172785 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 11:55:38.144843  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145077  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145155  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145321  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145403  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145599  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145696  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.145874  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.145956  172785 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 11:55:38.146118  172785 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 11:55:38.146130  172785 kubeadm.go:310] 
	I0210 11:55:38.146170  172785 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 11:55:38.146213  172785 kubeadm.go:310] 		timed out waiting for the condition
	I0210 11:55:38.146227  172785 kubeadm.go:310] 
	I0210 11:55:38.146286  172785 kubeadm.go:310] 	This error is likely caused by:
	I0210 11:55:38.146329  172785 kubeadm.go:310] 		- The kubelet is not running
	I0210 11:55:38.146481  172785 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 11:55:38.146492  172785 kubeadm.go:310] 
	I0210 11:55:38.146597  172785 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 11:55:38.146633  172785 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 11:55:38.146662  172785 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 11:55:38.146668  172785 kubeadm.go:310] 
	I0210 11:55:38.146752  172785 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 11:55:38.146820  172785 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 11:55:38.146830  172785 kubeadm.go:310] 
	I0210 11:55:38.146936  172785 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 11:55:38.147020  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 11:55:38.147098  172785 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 11:55:38.147210  172785 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 11:55:38.147271  172785 kubeadm.go:310] 
	I0210 11:55:38.147280  172785 kubeadm.go:394] duration metric: took 8m2.242182664s to StartCluster
	I0210 11:55:38.147337  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:55:38.147399  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:55:38.190552  172785 cri.go:89] found id: ""
	I0210 11:55:38.190585  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.190593  172785 logs.go:284] No container was found matching "kube-apiserver"
	I0210 11:55:38.190601  172785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 11:55:38.190653  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:55:38.223994  172785 cri.go:89] found id: ""
	I0210 11:55:38.224030  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.224041  172785 logs.go:284] No container was found matching "etcd"
	I0210 11:55:38.224050  172785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 11:55:38.224114  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:55:38.254975  172785 cri.go:89] found id: ""
	I0210 11:55:38.255002  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.255013  172785 logs.go:284] No container was found matching "coredns"
	I0210 11:55:38.255021  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:55:38.255087  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:55:38.294383  172785 cri.go:89] found id: ""
	I0210 11:55:38.294412  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.294423  172785 logs.go:284] No container was found matching "kube-scheduler"
	I0210 11:55:38.294431  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:55:38.294481  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:55:38.330915  172785 cri.go:89] found id: ""
	I0210 11:55:38.330943  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.330952  172785 logs.go:284] No container was found matching "kube-proxy"
	I0210 11:55:38.330958  172785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:55:38.331013  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:55:38.368811  172785 cri.go:89] found id: ""
	I0210 11:55:38.368841  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.368849  172785 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 11:55:38.368856  172785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 11:55:38.368912  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:55:38.405782  172785 cri.go:89] found id: ""
	I0210 11:55:38.405809  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.405817  172785 logs.go:284] No container was found matching "kindnet"
	I0210 11:55:38.405822  172785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:55:38.405878  172785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:55:38.443286  172785 cri.go:89] found id: ""
	I0210 11:55:38.443313  172785 logs.go:282] 0 containers: []
	W0210 11:55:38.443320  172785 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 11:55:38.443331  172785 logs.go:123] Gathering logs for dmesg ...
	I0210 11:55:38.443344  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:55:38.457513  172785 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:55:38.457552  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 11:55:38.535390  172785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 11:55:38.535413  172785 logs.go:123] Gathering logs for CRI-O ...
	I0210 11:55:38.535425  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 11:55:38.644609  172785 logs.go:123] Gathering logs for container status ...
	I0210 11:55:38.644644  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:55:38.708870  172785 logs.go:123] Gathering logs for kubelet ...
	I0210 11:55:38.708900  172785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
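The same post-mortem evidence can be collected by hand on the node; these are the commands minikube ran above, reproduced for convenience (kubectl binary path and kubeconfig location exactly as shown in this log):

	# any kube-apiserver containers known to CRI-O
	sudo crictl ps -a --quiet --name=kube-apiserver
	# recent kernel warnings and errors
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# node state as seen through the bundled kubectl (fails here: apiserver is down)
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# container runtime and kubelet journals
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400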
	W0210 11:55:38.771312  172785 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 11:55:38.771377  172785 out.go:270] * 
	W0210 11:55:38.771437  172785 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:55:38.771456  172785 out.go:270] * 
	W0210 11:55:38.772241  172785 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 11:55:38.775175  172785 out.go:201] 
	W0210 11:55:38.776401  172785 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 11:55:38.776449  172785 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 11:55:38.776467  172785 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 11:55:38.777818  172785 out.go:201] 
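Following the suggestion above, retrying this profile with the systemd cgroup driver override would look roughly like the sketch below (the --container-runtime flag is an assumption based on the crio.sock endpoints in this log; any other flags from the original start invocation are omitted):

	# capture full logs for the GitHub issue, as advised above
	minikube -p old-k8s-version-510006 logs --file=logs.txt
	# recreate the cluster with the suggested kubelet cgroup driver
	minikube delete -p old-k8s-version-510006
	minikube start -p old-k8s-version-510006 --kubernetes-version=v1.20.0 \
	  --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd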
	
	
	==> CRI-O <==
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.648458026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189434648427246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b697583b-8d6f-4796-a816-ea57799efac7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.649005675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29093c82-5180-4a25-a1c7-c08d6c0f1e55 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.649061826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29093c82-5180-4a25-a1c7-c08d6c0f1e55 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.649096025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=29093c82-5180-4a25-a1c7-c08d6c0f1e55 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.677531058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23f2a8f7-a84b-4828-8dc9-c2ef7f065f97 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.677612112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23f2a8f7-a84b-4828-8dc9-c2ef7f065f97 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.678553936Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab03bee4-6415-4948-bd8b-9b76565c3ba4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.678978146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189434678957398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab03bee4-6415-4948-bd8b-9b76565c3ba4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.679467146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cba44b02-e0e5-424f-83de-862abb63ab5d name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.679516260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cba44b02-e0e5-424f-83de-862abb63ab5d name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.679551584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cba44b02-e0e5-424f-83de-862abb63ab5d name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.708655910Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0441dd23-7372-4caf-835c-2e6fc9146731 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.708752651Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0441dd23-7372-4caf-835c-2e6fc9146731 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.710067309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd01c8a2-e949-4e90-b25e-9e367e710e47 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.710443538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189434710421363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd01c8a2-e949-4e90-b25e-9e367e710e47 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.710969943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edaae4e3-ef85-49e5-8842-3e7929609bb3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.711044822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edaae4e3-ef85-49e5-8842-3e7929609bb3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.711082944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=edaae4e3-ef85-49e5-8842-3e7929609bb3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.740371986Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ffcd189-3424-4390-95a6-3c3704c155b2 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.740459729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ffcd189-3424-4390-95a6-3c3704c155b2 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.741547175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f193d02-bcc7-4155-9fbc-155e9a729879 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.742017832Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739189434741978348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f193d02-bcc7-4155-9fbc-155e9a729879 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.742521866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bfcd5f0-85b9-4476-999f-4d3901edceb4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.742581322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bfcd5f0-85b9-4476-999f-4d3901edceb4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:10:34 old-k8s-version-510006 crio[632]: time="2025-02-10 12:10:34.742625655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2bfcd5f0-85b9-4476-999f-4d3901edceb4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb10 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054289] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039411] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.995296] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.082058] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.584320] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.340922] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.062802] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054806] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.152386] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.133625] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.265093] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.059229] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.067098] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.246980] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[ +12.002986] kauditd_printk_skb: 46 callbacks suppressed
	[Feb10 11:51] systemd-fstab-generator[5014]: Ignoring "noauto" option for root device
	[Feb10 11:53] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[  +0.060734] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:10:34 up 23 min,  0 users,  load average: 0.03, 0.05, 0.03
	Linux old-k8s-version-510006 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]: goroutine 152 [runnable]:
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008af340)
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]: goroutine 153 [select]:
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc00062bb30, 0xc000a46d01, 0xc0004a9f80, 0xc000a2e9b0, 0xc000148ac0, 0xc000148a80)
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000a46de0, 0x0, 0x0)
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008af340)
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Feb 10 12:10:29 old-k8s-version-510006 kubelet[7121]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Feb 10 12:10:29 old-k8s-version-510006 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 10 12:10:29 old-k8s-version-510006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 10 12:10:29 old-k8s-version-510006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 175.
	Feb 10 12:10:29 old-k8s-version-510006 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 10 12:10:29 old-k8s-version-510006 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 10 12:10:30 old-k8s-version-510006 kubelet[7130]: I0210 12:10:30.003287    7130 server.go:416] Version: v1.20.0
	Feb 10 12:10:30 old-k8s-version-510006 kubelet[7130]: I0210 12:10:30.003549    7130 server.go:837] Client rotation is on, will bootstrap in background
	Feb 10 12:10:30 old-k8s-version-510006 kubelet[7130]: I0210 12:10:30.005405    7130 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 10 12:10:30 old-k8s-version-510006 kubelet[7130]: W0210 12:10:30.006307    7130 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 10 12:10:30 old-k8s-version-510006 kubelet[7130]: I0210 12:10:30.006504    7130 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510006 -n old-k8s-version-510006
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 2 (236.683866ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-510006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (353.44s)
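A minimal troubleshooting sketch for this failure, based only on the commands quoted in the log above (the profile name and the 'minikube ssh' entry point are illustrative, not steps the test performs):

	# on the node, e.g. via: minikube ssh -p old-k8s-version-510006
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# from the host, the retry suggested in the log output above
	minikube start -p old-k8s-version-510006 --extra-config=kubelet.cgroup-driver=systemd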

                                                
                                    

Test pass (277/327)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 24.76
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 12.19
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.13
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.61
22 TestOffline 63.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 138.62
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 11.51
35 TestAddons/parallel/Registry 17.59
37 TestAddons/parallel/InspektorGadget 11.23
38 TestAddons/parallel/MetricsServer 6.99
40 TestAddons/parallel/CSI 59.88
41 TestAddons/parallel/Headlamp 18.56
42 TestAddons/parallel/CloudSpanner 5.57
43 TestAddons/parallel/LocalPath 56.25
44 TestAddons/parallel/NvidiaDevicePlugin 6.6
45 TestAddons/parallel/Yakd 11.71
47 TestAddons/StoppedEnableDisable 91.24
48 TestCertOptions 58.43
49 TestCertExpiration 321.78
51 TestForceSystemdFlag 80.77
52 TestForceSystemdEnv 72.05
54 TestKVMDriverInstallOrUpdate 3.58
58 TestErrorSpam/setup 39.08
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.76
61 TestErrorSpam/pause 1.54
62 TestErrorSpam/unpause 1.66
63 TestErrorSpam/stop 5.41
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 55.08
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 45.43
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.79
75 TestFunctional/serial/CacheCmd/cache/add_local 2.46
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 40.87
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.35
86 TestFunctional/serial/LogsFileCmd 1.39
87 TestFunctional/serial/InvalidService 4.26
89 TestFunctional/parallel/ConfigCmd 0.32
90 TestFunctional/parallel/DashboardCmd 12.47
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.45
93 TestFunctional/parallel/StatusCmd 1.07
97 TestFunctional/parallel/ServiceCmdConnect 11.47
98 TestFunctional/parallel/AddonsCmd 0.12
99 TestFunctional/parallel/PersistentVolumeClaim 45.79
101 TestFunctional/parallel/SSHCmd 0.47
102 TestFunctional/parallel/CpCmd 1.36
103 TestFunctional/parallel/MySQL 24.12
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.21
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
113 TestFunctional/parallel/License 1.49
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/Setup 1.87
129 TestFunctional/parallel/ServiceCmd/DeployApp 20.15
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.81
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.69
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.63
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.85
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
138 TestFunctional/parallel/ProfileCmd/profile_list 0.42
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
140 TestFunctional/parallel/MountCmd/any-port 9.18
141 TestFunctional/parallel/MountCmd/specific-port 1.9
142 TestFunctional/parallel/ServiceCmd/List 0.89
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.96
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.04
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
146 TestFunctional/parallel/ServiceCmd/Format 0.34
147 TestFunctional/parallel/ServiceCmd/URL 0.33
148 TestFunctional/parallel/Version/short 0.05
149 TestFunctional/parallel/Version/components 0.43
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 190.76
161 TestMultiControlPlane/serial/DeployApp 7.24
162 TestMultiControlPlane/serial/PingHostFromPods 1.14
163 TestMultiControlPlane/serial/AddWorkerNode 60.1
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
166 TestMultiControlPlane/serial/CopyFile 12.74
167 TestMultiControlPlane/serial/StopSecondaryNode 91.62
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
169 TestMultiControlPlane/serial/RestartSecondaryNode 50.3
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 428.1
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.05
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
174 TestMultiControlPlane/serial/StopCluster 272.9
175 TestMultiControlPlane/serial/RestartCluster 110.94
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
177 TestMultiControlPlane/serial/AddSecondaryNode 80.64
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
182 TestJSONOutput/start/Command 53.95
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.65
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.59
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.35
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.2
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 93.27
214 TestMountStart/serial/StartWithMountFirst 27.24
215 TestMountStart/serial/VerifyMountFirst 0.38
216 TestMountStart/serial/StartWithMountSecond 27.17
217 TestMountStart/serial/VerifyMountSecond 0.38
218 TestMountStart/serial/DeleteFirst 0.89
219 TestMountStart/serial/VerifyMountPostDelete 0.38
220 TestMountStart/serial/Stop 1.27
221 TestMountStart/serial/RestartStopped 22
222 TestMountStart/serial/VerifyMountPostStop 0.37
225 TestMultiNode/serial/FreshStart2Nodes 113.35
226 TestMultiNode/serial/DeployApp2Nodes 5.75
227 TestMultiNode/serial/PingHostFrom2Pods 0.78
228 TestMultiNode/serial/AddNode 52.04
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.6
231 TestMultiNode/serial/CopyFile 7.21
232 TestMultiNode/serial/StopNode 2.31
233 TestMultiNode/serial/StartAfterStop 39.37
234 TestMultiNode/serial/RestartKeepsNodes 347.97
235 TestMultiNode/serial/DeleteNode 2.77
236 TestMultiNode/serial/StopMultiNode 182.06
237 TestMultiNode/serial/RestartMultiNode 115.76
238 TestMultiNode/serial/ValidateNameConflict 43.86
245 TestScheduledStopUnix 112.69
249 TestRunningBinaryUpgrade 221.67
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
255 TestNoKubernetes/serial/StartWithK8s 96.91
256 TestNoKubernetes/serial/StartWithStopK8s 63.77
257 TestNoKubernetes/serial/Start 50.29
265 TestNetworkPlugins/group/false 3.34
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
270 TestNoKubernetes/serial/ProfileList 3.65
271 TestNoKubernetes/serial/Stop 1.33
272 TestNoKubernetes/serial/StartNoArgs 46.13
273 TestStoppedBinaryUpgrade/Setup 3.16
274 TestStoppedBinaryUpgrade/Upgrade 125.51
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
284 TestPause/serial/Start 75.69
285 TestPause/serial/SecondStartNoReconfiguration 40.65
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.83
287 TestNetworkPlugins/group/auto/Start 56.53
288 TestPause/serial/Pause 0.65
289 TestPause/serial/VerifyStatus 0.26
290 TestPause/serial/Unpause 0.67
291 TestPause/serial/PauseAgain 0.76
292 TestPause/serial/DeletePaused 0.9
293 TestPause/serial/VerifyDeletedResources 0.7
294 TestNetworkPlugins/group/kindnet/Start 66.24
295 TestNetworkPlugins/group/calico/Start 100.61
296 TestNetworkPlugins/group/auto/KubeletFlags 0.26
297 TestNetworkPlugins/group/auto/NetCatPod 13.33
298 TestNetworkPlugins/group/auto/DNS 0.14
299 TestNetworkPlugins/group/auto/Localhost 0.12
300 TestNetworkPlugins/group/auto/HairPin 0.13
301 TestNetworkPlugins/group/custom-flannel/Start 70.98
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
304 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
305 TestNetworkPlugins/group/kindnet/DNS 0.13
306 TestNetworkPlugins/group/kindnet/Localhost 0.13
307 TestNetworkPlugins/group/kindnet/HairPin 0.11
308 TestNetworkPlugins/group/enable-default-cni/Start 58.27
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.2
311 TestNetworkPlugins/group/calico/NetCatPod 11.23
312 TestNetworkPlugins/group/calico/DNS 0.2
313 TestNetworkPlugins/group/calico/Localhost 0.13
314 TestNetworkPlugins/group/calico/HairPin 0.13
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.28
317 TestNetworkPlugins/group/custom-flannel/DNS 0.17
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
320 TestNetworkPlugins/group/flannel/Start 70.22
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
323 TestNetworkPlugins/group/bridge/Start 66.67
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
331 TestNetworkPlugins/group/flannel/NetCatPod 9.27
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
333 TestNetworkPlugins/group/bridge/NetCatPod 10.23
334 TestNetworkPlugins/group/flannel/DNS 0.16
335 TestNetworkPlugins/group/flannel/Localhost 0.13
336 TestNetworkPlugins/group/flannel/HairPin 0.13
337 TestNetworkPlugins/group/bridge/DNS 0.21
338 TestNetworkPlugins/group/bridge/Localhost 0.18
339 TestNetworkPlugins/group/bridge/HairPin 0.17
341 TestStartStop/group/no-preload/serial/FirstStart 77.16
343 TestStartStop/group/embed-certs/serial/FirstStart 79.14
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 99.24
346 TestStartStop/group/no-preload/serial/DeployApp 10.67
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.45
348 TestStartStop/group/embed-certs/serial/DeployApp 10.3
349 TestStartStop/group/no-preload/serial/Stop 91.03
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
351 TestStartStop/group/embed-certs/serial/Stop 91.47
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.39
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
356 TestStartStop/group/no-preload/serial/SecondStart 349.02
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
358 TestStartStop/group/embed-certs/serial/SecondStart 310.29
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 336.88
363 TestStartStop/group/old-k8s-version/serial/Stop 3.3
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
368 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
369 TestStartStop/group/embed-certs/serial/Pause 2.83
371 TestStartStop/group/newest-cni/serial/FirstStart 49.36
372 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
374 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
375 TestStartStop/group/no-preload/serial/Pause 3.17
376 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
378 TestStartStop/group/newest-cni/serial/DeployApp 0
379 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.91
380 TestStartStop/group/newest-cni/serial/Stop 10.35
381 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
382 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.52
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
384 TestStartStop/group/newest-cni/serial/SecondStart 38.1
385 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
388 TestStartStop/group/newest-cni/serial/Pause 2.27
x
+
TestDownloadOnly/v1.20.0/json-events (24.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-052291 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-052291 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.754765141s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.76s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0210 10:33:20.024228  116470 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0210 10:33:20.024335  116470 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
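The preload-exists check only asserts that the cached tarball is present on disk. A quick manual equivalent, using the cache path printed in the log above (illustrative only, not part of the test):

	ls -lh /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4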

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-052291
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-052291: exit status 85 (62.937717ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-052291 | jenkins | v1.35.0 | 10 Feb 25 10:32 UTC |          |
	|         | -p download-only-052291        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 10:32:55
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 10:32:55.312894  116482 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:32:55.313017  116482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:32:55.313028  116482 out.go:358] Setting ErrFile to fd 2...
	I0210 10:32:55.313035  116482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:32:55.313274  116482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	W0210 10:32:55.313428  116482 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20385-109271/.minikube/config/config.json: open /home/jenkins/minikube-integration/20385-109271/.minikube/config/config.json: no such file or directory
	I0210 10:32:55.314029  116482 out.go:352] Setting JSON to true
	I0210 10:32:55.314922  116482 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4517,"bootTime":1739179058,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 10:32:55.315025  116482 start.go:139] virtualization: kvm guest
	I0210 10:32:55.317330  116482 out.go:97] [download-only-052291] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0210 10:32:55.317447  116482 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball: no such file or directory
	I0210 10:32:55.317479  116482 notify.go:220] Checking for updates...
	I0210 10:32:55.318794  116482 out.go:169] MINIKUBE_LOCATION=20385
	I0210 10:32:55.320035  116482 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:32:55.321090  116482 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 10:32:55.322147  116482 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 10:32:55.323241  116482 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0210 10:32:55.325396  116482 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 10:32:55.325645  116482 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:32:55.424584  116482 out.go:97] Using the kvm2 driver based on user configuration
	I0210 10:32:55.424613  116482 start.go:297] selected driver: kvm2
	I0210 10:32:55.424619  116482 start.go:901] validating driver "kvm2" against <nil>
	I0210 10:32:55.424970  116482 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 10:32:55.425121  116482 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-109271/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 10:32:55.441055  116482 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 10:32:55.441118  116482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 10:32:55.442410  116482 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0210 10:32:55.442741  116482 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 10:32:55.442791  116482 cni.go:84] Creating CNI manager for ""
	I0210 10:32:55.442840  116482 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 10:32:55.442857  116482 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 10:32:55.442960  116482 start.go:340] cluster config:
	{Name:download-only-052291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-052291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:32:55.443299  116482 iso.go:125] acquiring lock: {Name:mk479d49a84808a4b16be867aad83d1d3d802291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 10:32:55.445020  116482 out.go:97] Downloading VM boot image ...
	I0210 10:32:55.445058  116482 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20385-109271/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 10:33:05.762892  116482 out.go:97] Starting "download-only-052291" primary control-plane node in "download-only-052291" cluster
	I0210 10:33:05.762920  116482 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 10:33:05.861604  116482 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 10:33:05.861635  116482 cache.go:56] Caching tarball of preloaded images
	I0210 10:33:05.861798  116482 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 10:33:05.863681  116482 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0210 10:33:05.863699  116482 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0210 10:33:06.040954  116482 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-052291 host does not exist
	  To start a cluster, run: "minikube start -p download-only-052291"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-052291
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/json-events (12.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-183974 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-183974 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.193365073s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (12.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0210 10:33:32.545958  116470 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0210 10:33:32.546002  116470 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-183974
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-183974: exit status 85 (59.701859ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-052291 | jenkins | v1.35.0 | 10 Feb 25 10:32 UTC |                     |
	|         | -p download-only-052291        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 10 Feb 25 10:33 UTC | 10 Feb 25 10:33 UTC |
	| delete  | -p download-only-052291        | download-only-052291 | jenkins | v1.35.0 | 10 Feb 25 10:33 UTC | 10 Feb 25 10:33 UTC |
	| start   | -o=json --download-only        | download-only-183974 | jenkins | v1.35.0 | 10 Feb 25 10:33 UTC |                     |
	|         | -p download-only-183974        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 10:33:20
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 10:33:20.393196  116740 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:33:20.393289  116740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:33:20.393293  116740 out.go:358] Setting ErrFile to fd 2...
	I0210 10:33:20.393297  116740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:33:20.393511  116740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 10:33:20.394058  116740 out.go:352] Setting JSON to true
	I0210 10:33:20.394903  116740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4542,"bootTime":1739179058,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 10:33:20.395003  116740 start.go:139] virtualization: kvm guest
	I0210 10:33:20.397039  116740 out.go:97] [download-only-183974] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 10:33:20.397179  116740 notify.go:220] Checking for updates...
	I0210 10:33:20.398465  116740 out.go:169] MINIKUBE_LOCATION=20385
	I0210 10:33:20.399692  116740 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:33:20.400888  116740 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 10:33:20.402000  116740 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 10:33:20.403082  116740 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0210 10:33:20.404986  116740 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 10:33:20.405184  116740 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:33:20.438235  116740 out.go:97] Using the kvm2 driver based on user configuration
	I0210 10:33:20.438263  116740 start.go:297] selected driver: kvm2
	I0210 10:33:20.438271  116740 start.go:901] validating driver "kvm2" against <nil>
	I0210 10:33:20.438698  116740 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 10:33:20.438797  116740 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20385-109271/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 10:33:20.453748  116740 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 10:33:20.453797  116740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 10:33:20.454593  116740 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0210 10:33:20.454796  116740 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 10:33:20.454833  116740 cni.go:84] Creating CNI manager for ""
	I0210 10:33:20.454895  116740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 10:33:20.454912  116740 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 10:33:20.454982  116740 start.go:340] cluster config:
	{Name:download-only-183974 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-183974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:33:20.455103  116740 iso.go:125] acquiring lock: {Name:mk479d49a84808a4b16be867aad83d1d3d802291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 10:33:20.456780  116740 out.go:97] Starting "download-only-183974" primary control-plane node in "download-only-183974" cluster
	I0210 10:33:20.457026  116740 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 10:33:20.934309  116740 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 10:33:20.934359  116740 cache.go:56] Caching tarball of preloaded images
	I0210 10:33:20.934557  116740 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 10:33:20.936406  116740 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0210 10:33:20.936434  116740 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0210 10:33:21.035710  116740 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20385-109271/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-183974 host does not exist
	  To start a cluster, run: "minikube start -p download-only-183974"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-183974
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I0210 10:33:33.120876  116470 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-335395 --alsologtostderr --binary-mirror http://127.0.0.1:45531 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-335395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-335395
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
x
+
TestOffline (63.63s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-281749 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-281749 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.60663579s)
helpers_test.go:175: Cleaning up "offline-crio-281749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-281749
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-281749: (1.022762547s)
--- PASS: TestOffline (63.63s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-176336
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-176336: exit status 85 (53.496544ms)

                                                
                                                
-- stdout --
	* Profile "addons-176336" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-176336"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-176336
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-176336: exit status 85 (54.082469ms)

                                                
                                                
-- stdout --
	* Profile "addons-176336" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-176336"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (138.62s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-176336 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-176336 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m18.615317746s)
--- PASS: TestAddons/Setup (138.62s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-176336 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-176336 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-176336 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-176336 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0e0f6684-3b28-497f-b99e-d8ce49ab2130] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0e0f6684-3b28-497f-b99e-d8ce49ab2130] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.00309942s
addons_test.go:633: (dbg) Run:  kubectl --context addons-176336 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-176336 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-176336 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.51s)

                                                
                                    
TestAddons/parallel/Registry (17.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.487506ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-h788n" [de15c872-5255-4828-89b5-5881bd20be96] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002201105s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pz2jr" [c39641af-4408-48ac-ad63-707a945defdb] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003198142s
addons_test.go:331: (dbg) Run:  kubectl --context addons-176336 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-176336 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-176336 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.707208925s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 ip
2025/02/10 10:36:29 [DEBUG] GET http://192.168.39.19:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.59s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.23s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tt54k" [e42cc214-5aad-46e6-bb68-bcf55ef21760] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004128004s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-176336 addons disable inspektor-gadget --alsologtostderr -v=1: (6.219905673s)
--- PASS: TestAddons/parallel/InspektorGadget (11.23s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.99s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.349748ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-8zxm2" [dadc63e3-cb8c-4654-a037-f61e6fd19b18] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003153993s
addons_test.go:402: (dbg) Run:  kubectl --context addons-176336 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.99s)

                                                
                                    
TestAddons/parallel/CSI (59.88s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0210 10:36:25.500836  116470 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0210 10:36:25.505136  116470 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0210 10:36:25.505161  116470 kapi.go:107] duration metric: took 4.341759ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.350516ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-176336 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-176336 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3330b61d-aced-448f-a71e-6471f7625194] Pending
helpers_test.go:344: "task-pv-pod" [3330b61d-aced-448f-a71e-6471f7625194] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3330b61d-aced-448f-a71e-6471f7625194] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.003666639s
addons_test.go:511: (dbg) Run:  kubectl --context addons-176336 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-176336 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-176336 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-176336 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-176336 delete pod task-pv-pod: (1.01956594s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-176336 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-176336 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-176336 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [853c4728-1092-4951-805d-db078866aa70] Pending
helpers_test.go:344: "task-pv-pod-restore" [853c4728-1092-4951-805d-db078866aa70] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [853c4728-1092-4951-805d-db078866aa70] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003582495s
addons_test.go:553: (dbg) Run:  kubectl --context addons-176336 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-176336 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-176336 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-176336 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.789206263s)
--- PASS: TestAddons/parallel/CSI (59.88s)

                                                
                                    
TestAddons/parallel/Headlamp (18.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-176336 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-96ws8" [fad39b1c-de46-4bd7-9168-3ffede8e53a7] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-96ws8" [fad39b1c-de46-4bd7-9168-3ffede8e53a7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-96ws8" [fad39b1c-de46-4bd7-9168-3ffede8e53a7] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003415254s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-176336 addons disable headlamp --alsologtostderr -v=1: (5.716988139s)
--- PASS: TestAddons/parallel/Headlamp (18.56s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-qlfkc" [b240fa26-07c9-4d7b-a37f-b24fe2cb3e1e] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004137358s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                    
TestAddons/parallel/LocalPath (56.25s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-176336 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-176336 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-176336 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cd5e3e4c-229e-4510-86a7-4af9ea35d0ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cd5e3e4c-229e-4510-86a7-4af9ea35d0ef] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cd5e3e4c-229e-4510-86a7-4af9ea35d0ef] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00269109s
addons_test.go:906: (dbg) Run:  kubectl --context addons-176336 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 ssh "cat /opt/local-path-provisioner/pvc-900f7ab4-d741-40ec-972f-db46b21c9e8e_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-176336 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-176336 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-176336 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.337780793s)
--- PASS: TestAddons/parallel/LocalPath (56.25s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.6s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-t7lzz" [7a0f2255-4f39-406e-a4d4-d3339799a3cf] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004124347s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

                                                
                                    
TestAddons/parallel/Yakd (11.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-tvl99" [14282bce-e255-4fce-8f0e-7a40fad35d45] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004119284s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-176336 addons disable yakd --alsologtostderr -v=1: (5.708570061s)
--- PASS: TestAddons/parallel/Yakd (11.71s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.24s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-176336
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-176336: (1m30.955928313s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-176336
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-176336
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-176336
--- PASS: TestAddons/StoppedEnableDisable (91.24s)

                                                
                                    
TestCertOptions (58.43s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-322986 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-322986 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (57.190321309s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-322986 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-322986 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-322986 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-322986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-322986
--- PASS: TestCertOptions (58.43s)

                                                
                                    
TestCertExpiration (321.78s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-038969 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-038969 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m38.336612499s)
E0210 11:34:06.276502  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-038969 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-038969 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (42.599298137s)
helpers_test.go:175: Cleaning up "cert-expiration-038969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-038969
--- PASS: TestCertExpiration (321.78s)

                                                
                                    
TestForceSystemdFlag (80.77s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-016028 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-016028 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.551379626s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-016028 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-016028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-016028
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-016028: (1.022755164s)
--- PASS: TestForceSystemdFlag (80.77s)

                                                
                                    
TestForceSystemdEnv (72.05s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-588458 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-588458 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.089592131s)
helpers_test.go:175: Cleaning up "force-systemd-env-588458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-588458
--- PASS: TestForceSystemdEnv (72.05s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.58s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0210 11:34:43.846098  116470 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 11:34:43.846303  116470 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0210 11:34:43.880307  116470 install.go:62] docker-machine-driver-kvm2: exit status 1
W0210 11:34:43.880705  116470 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0210 11:34:43.880775  116470 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3150441251/001/docker-machine-driver-kvm2
I0210 11:34:44.079802  116470 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3150441251/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000525a18 gz:0xc000525ab0 tar:0xc000525a50 tar.bz2:0xc000525a60 tar.gz:0xc000525a80 tar.xz:0xc000525a90 tar.zst:0xc000525aa0 tbz2:0xc000525a60 tgz:0xc000525a80 txz:0xc000525a90 tzst:0xc000525aa0 xz:0xc000525ab8 zip:0xc000525ac0 zst:0xc000525ad0] Getters:map[file:0xc0019faf10 http:0xc0007cca00 https:0xc0007cca50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0210 11:34:44.079866  116470 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3150441251/001/docker-machine-driver-kvm2
I0210 11:34:45.724875  116470 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 11:34:45.724981  116470 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0210 11:34:45.756497  116470 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0210 11:34:45.756541  116470 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0210 11:34:45.756621  116470 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0210 11:34:45.756660  116470 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3150441251/002/docker-machine-driver-kvm2
I0210 11:34:45.784623  116470 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3150441251/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000525a18 gz:0xc000525ab0 tar:0xc000525a50 tar.bz2:0xc000525a60 tar.gz:0xc000525a80 tar.xz:0xc000525a90 tar.zst:0xc000525aa0 tbz2:0xc000525a60 tgz:0xc000525a80 txz:0xc000525a90 tzst:0xc000525aa0 xz:0xc000525ab8 zip:0xc000525ac0 zst:0xc000525ad0] Getters:map[file:0xc002082720 http:0xc000c310e0 https:0xc000c31130] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0210 11:34:45.784685  116470 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3150441251/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.58s)

                                                
                                    
TestErrorSpam/setup (39.08s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-358124 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-358124 --driver=kvm2  --container-runtime=crio
E0210 10:40:53.032576  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:53.039088  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:53.050446  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:53.071794  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:53.113208  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:53.194666  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:53.356243  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:53.677779  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:54.319826  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:55.601279  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:58.163108  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:41:03.284690  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:41:13.526998  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-358124 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-358124 --driver=kvm2  --container-runtime=crio: (39.081652515s)
--- PASS: TestErrorSpam/setup (39.08s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

                                                
                                    
TestErrorSpam/stop (5.41s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 stop: (1.627824076s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 stop: (1.724879304s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-358124 --log_dir /tmp/nospam-358124 stop: (2.052660737s)
--- PASS: TestErrorSpam/stop (5.41s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20385-109271/.minikube/files/etc/test/nested/copy/116470/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (55.08s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567541 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0210 10:41:34.008481  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:42:14.970036  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-567541 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.082635666s)
--- PASS: TestFunctional/serial/StartWithProxy (55.08s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (45.43s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0210 10:42:22.747114  116470 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567541 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-567541 --alsologtostderr -v=8: (45.428100792s)
functional_test.go:680: soft start took 45.428920743s for "functional-567541" cluster.
I0210 10:43:08.175669  116470 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (45.43s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-567541 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-567541 cache add registry.k8s.io/pause:3.1: (1.534921609s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-567541 cache add registry.k8s.io/pause:3.3: (1.642133807s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-567541 cache add registry.k8s.io/pause:latest: (1.609476688s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.79s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.46s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-567541 /tmp/TestFunctionalserialCacheCmdcacheadd_local1385955335/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 cache add minikube-local-cache-test:functional-567541
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-567541 cache add minikube-local-cache-test:functional-567541: (2.154811251s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 cache delete minikube-local-cache-test:functional-567541
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-567541
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.46s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567541 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (209.242888ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-567541 cache reload: (1.403330435s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 kubectl -- --context functional-567541 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-567541 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.87s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567541 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0210 10:43:36.891442  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-567541 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.869722817s)
functional_test.go:778: restart took 40.869876391s for "functional-567541" cluster.
I0210 10:43:59.146410  116470 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (40.87s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-567541 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-567541 logs: (1.344732858s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 logs --file /tmp/TestFunctionalserialLogsFileCmd2441706742/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-567541 logs --file /tmp/TestFunctionalserialLogsFileCmd2441706742/001/logs.txt: (1.389783091s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                    
TestFunctional/serial/InvalidService (4.26s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-567541 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-567541
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-567541: exit status 115 (259.642975ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.8:31377 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-567541 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567541 config get cpus: exit status 14 (50.578568ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567541 config get cpus: exit status 14 (47.841972ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
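
The two non-zero exits above show that "config get" returns exit status 14 when the key has been unset. A minimal, illustrative Go sketch of checking that exit code (not the test's code; the binary path and profile name are reused from the run above):

// Illustrative sketch: detect minikube's "key not found in config" exit status.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-567541", "config", "get", "cpus")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("cpus is not set in the profile config, as expected")
	} else if err != nil {
		fmt.Println("unexpected error:", err)
	}
}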

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-567541 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-567541 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 124274: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.47s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567541 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-567541 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (135.468047ms)

                                                
                                                
-- stdout --
	* [functional-567541] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 10:44:19.683390  124167 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:44:19.683507  124167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:44:19.683518  124167 out.go:358] Setting ErrFile to fd 2...
	I0210 10:44:19.683525  124167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:44:19.684194  124167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 10:44:19.685241  124167 out.go:352] Setting JSON to false
	I0210 10:44:19.686391  124167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5202,"bootTime":1739179058,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 10:44:19.686490  124167 start.go:139] virtualization: kvm guest
	I0210 10:44:19.688143  124167 out.go:177] * [functional-567541] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 10:44:19.689651  124167 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 10:44:19.689651  124167 notify.go:220] Checking for updates...
	I0210 10:44:19.691022  124167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:44:19.692229  124167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 10:44:19.693451  124167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 10:44:19.694519  124167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 10:44:19.695653  124167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 10:44:19.697357  124167 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 10:44:19.697725  124167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:44:19.697775  124167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:44:19.713481  124167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44219
	I0210 10:44:19.713976  124167 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:44:19.714552  124167 main.go:141] libmachine: Using API Version  1
	I0210 10:44:19.714604  124167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:44:19.714997  124167 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:44:19.715248  124167 main.go:141] libmachine: (functional-567541) Calling .DriverName
	I0210 10:44:19.715532  124167 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:44:19.715819  124167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:44:19.715858  124167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:44:19.730875  124167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40921
	I0210 10:44:19.731370  124167 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:44:19.731790  124167 main.go:141] libmachine: Using API Version  1
	I0210 10:44:19.731813  124167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:44:19.732160  124167 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:44:19.732357  124167 main.go:141] libmachine: (functional-567541) Calling .DriverName
	I0210 10:44:19.764490  124167 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 10:44:19.765370  124167 start.go:297] selected driver: kvm2
	I0210 10:44:19.765384  124167 start.go:901] validating driver "kvm2" against &{Name:functional-567541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-567541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:44:19.765493  124167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 10:44:19.767304  124167 out.go:201] 
	W0210 10:44:19.768389  124167 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0210 10:44:19.769582  124167 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567541 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567541 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-567541 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (445.716345ms)

                                                
                                                
-- stdout --
	* [functional-567541] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 10:44:19.252112  124136 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:44:19.252235  124136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:44:19.252245  124136 out.go:358] Setting ErrFile to fd 2...
	I0210 10:44:19.252252  124136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:44:19.252625  124136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 10:44:19.253310  124136 out.go:352] Setting JSON to false
	I0210 10:44:19.254567  124136 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5201,"bootTime":1739179058,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 10:44:19.254659  124136 start.go:139] virtualization: kvm guest
	I0210 10:44:19.257013  124136 out.go:177] * [functional-567541] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0210 10:44:19.258244  124136 notify.go:220] Checking for updates...
	I0210 10:44:19.258262  124136 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 10:44:19.259418  124136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:44:19.260552  124136 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 10:44:19.261610  124136 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 10:44:19.262712  124136 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 10:44:19.263940  124136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 10:44:19.265644  124136 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 10:44:19.266270  124136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:44:19.266395  124136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:44:19.283369  124136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0210 10:44:19.283871  124136 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:44:19.284451  124136 main.go:141] libmachine: Using API Version  1
	I0210 10:44:19.284476  124136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:44:19.284976  124136 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:44:19.285239  124136 main.go:141] libmachine: (functional-567541) Calling .DriverName
	I0210 10:44:19.285564  124136 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:44:19.286011  124136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:44:19.286078  124136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:44:19.303407  124136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I0210 10:44:19.303961  124136 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:44:19.304666  124136 main.go:141] libmachine: Using API Version  1
	I0210 10:44:19.304693  124136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:44:19.305141  124136 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:44:19.305360  124136 main.go:141] libmachine: (functional-567541) Calling .DriverName
	I0210 10:44:19.417836  124136 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0210 10:44:19.522993  124136 start.go:297] selected driver: kvm2
	I0210 10:44:19.523042  124136 start.go:901] validating driver "kvm2" against &{Name:functional-567541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-567541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:44:19.523256  124136 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 10:44:19.628308  124136 out.go:201] 
	W0210 10:44:19.631057  124136 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0210 10:44:19.632520  124136 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.45s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-567541 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-567541 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-49cpf" [1bddfbda-e1f5-4bc7-9e51-73b29648fa2b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-49cpf" [1bddfbda-e1f5-4bc7-9e51-73b29648fa2b] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003401198s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.8:31196
functional_test.go:1692: http://192.168.39.8:31196: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-49cpf

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.8:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.8:31196
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.47s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (45.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [59e07e20-824f-40b2-b87f-59044d03edc9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003319649s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-567541 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-567541 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-567541 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-567541 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a1b72e5c-4647-48f9-a425-c192e1064fab] Pending
helpers_test.go:344: "sp-pod" [a1b72e5c-4647-48f9-a425-c192e1064fab] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a1b72e5c-4647-48f9-a425-c192e1064fab] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003447267s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-567541 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-567541 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-567541 delete -f testdata/storage-provisioner/pod.yaml: (3.000860816s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-567541 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [68d8c040-39fb-41c4-8083-87e7d3a17d9b] Pending
helpers_test.go:344: "sp-pod" [68d8c040-39fb-41c4-8083-87e7d3a17d9b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/02/10 10:44:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [68d8c040-39fb-41c4-8083-87e7d3a17d9b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003235s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-567541 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.79s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh -n functional-567541 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 cp functional-567541:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2017655888/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh -n functional-567541 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh -n functional-567541 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.36s)

                                                
                                    
TestFunctional/parallel/MySQL (24.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-567541 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-8p9nf" [6384538a-46a8-4405-9744-bb1003f64eba] Pending
helpers_test.go:344: "mysql-58ccfd96bb-8p9nf" [6384538a-46a8-4405-9744-bb1003f64eba] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-8p9nf" [6384538a-46a8-4405-9744-bb1003f64eba] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.003714535s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-567541 exec mysql-58ccfd96bb-8p9nf -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-567541 exec mysql-58ccfd96bb-8p9nf -- mysql -ppassword -e "show databases;": exit status 1 (113.124579ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0210 10:44:52.930312  116470 retry.go:31] will retry after 694.822158ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-567541 exec mysql-58ccfd96bb-8p9nf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.12s)
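
The single ERROR 2002 followed by "will retry after 694.822158ms" above is the usual wait-until-mysqld-accepts-connections pattern. A minimal, illustrative backoff loop around kubectl exec (not the test's code; the context and pod names are copied from the run above):

// Illustrative sketch: retry a query until the MySQL pod starts answering.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-58ccfd96bb-8p9nf" // pod name taken from the run above
	deadline := time.Now().Add(2 * time.Minute)
	for delay := time.Second; ; delay *= 2 {
		out, err := exec.Command("kubectl", "--context", "functional-567541",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up:", err)
			return
		}
		time.Sleep(delay)
	}
}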

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/116470/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "sudo cat /etc/test/nested/copy/116470/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/116470.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "sudo cat /etc/ssl/certs/116470.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/116470.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "sudo cat /usr/share/ca-certificates/116470.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/1164702.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "sudo cat /etc/ssl/certs/1164702.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/1164702.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "sudo cat /usr/share/ca-certificates/1164702.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.21s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-567541 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567541 ssh "sudo systemctl is-active docker": exit status 1 (237.418352ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567541 ssh "sudo systemctl is-active containerd": exit status 1 (262.556908ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)

                                                
                                    
TestFunctional/parallel/License (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2305: (dbg) Done: out/minikube-linux-amd64 license: (1.492603135s)
--- PASS: TestFunctional/parallel/License (1.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567541 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-567541
localhost/kicbase/echo-server:functional-567541
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567541 image ls --format short --alsologtostderr:
I0210 10:44:33.259384  125206 out.go:345] Setting OutFile to fd 1 ...
I0210 10:44:33.259494  125206 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:44:33.259503  125206 out.go:358] Setting ErrFile to fd 2...
I0210 10:44:33.259507  125206 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:44:33.259667  125206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
I0210 10:44:33.260272  125206 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 10:44:33.260371  125206 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 10:44:33.260719  125206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 10:44:33.260776  125206 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:44:33.276824  125206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39571
I0210 10:44:33.277281  125206 main.go:141] libmachine: () Calling .GetVersion
I0210 10:44:33.277803  125206 main.go:141] libmachine: Using API Version  1
I0210 10:44:33.277822  125206 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:44:33.278226  125206 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:44:33.278415  125206 main.go:141] libmachine: (functional-567541) Calling .GetState
I0210 10:44:33.280142  125206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 10:44:33.280188  125206 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:44:33.295526  125206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45731
I0210 10:44:33.295900  125206 main.go:141] libmachine: () Calling .GetVersion
I0210 10:44:33.296314  125206 main.go:141] libmachine: Using API Version  1
I0210 10:44:33.296330  125206 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:44:33.296639  125206 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:44:33.296840  125206 main.go:141] libmachine: (functional-567541) Calling .DriverName
I0210 10:44:33.297041  125206 ssh_runner.go:195] Run: systemctl --version
I0210 10:44:33.297066  125206 main.go:141] libmachine: (functional-567541) Calling .GetSSHHostname
I0210 10:44:33.299745  125206 main.go:141] libmachine: (functional-567541) DBG | domain functional-567541 has defined MAC address 52:54:00:fe:3f:3d in network mk-functional-567541
I0210 10:44:33.300103  125206 main.go:141] libmachine: (functional-567541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3f:3d", ip: ""} in network mk-functional-567541: {Iface:virbr1 ExpiryTime:2025-02-10 11:41:42 +0000 UTC Type:0 Mac:52:54:00:fe:3f:3d Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-567541 Clientid:01:52:54:00:fe:3f:3d}
I0210 10:44:33.300126  125206 main.go:141] libmachine: (functional-567541) DBG | domain functional-567541 has defined IP address 192.168.39.8 and MAC address 52:54:00:fe:3f:3d in network mk-functional-567541
I0210 10:44:33.300259  125206 main.go:141] libmachine: (functional-567541) Calling .GetSSHPort
I0210 10:44:33.300458  125206 main.go:141] libmachine: (functional-567541) Calling .GetSSHKeyPath
I0210 10:44:33.300613  125206 main.go:141] libmachine: (functional-567541) Calling .GetSSHUsername
I0210 10:44:33.300745  125206 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/functional-567541/id_rsa Username:docker}
I0210 10:44:33.382622  125206 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 10:44:33.426717  125206 main.go:141] libmachine: Making call to close driver server
I0210 10:44:33.426735  125206 main.go:141] libmachine: (functional-567541) Calling .Close
I0210 10:44:33.427071  125206 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:44:33.427106  125206 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:44:33.427119  125206 main.go:141] libmachine: Making call to close driver server
I0210 10:44:33.427127  125206 main.go:141] libmachine: (functional-567541) Calling .Close
I0210 10:44:33.427385  125206 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:44:33.427400  125206 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567541 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| localhost/minikube-local-cache-test     | functional-567541  | 4f208b80a0fc4 | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| docker.io/library/nginx                 | latest             | 97662d24417b3 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-567541  | 9056ab77afb8e | 4.94MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567541 image ls --format table --alsologtostderr:
I0210 10:44:34.017527  125399 out.go:345] Setting OutFile to fd 1 ...
I0210 10:44:34.017630  125399 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:44:34.017638  125399 out.go:358] Setting ErrFile to fd 2...
I0210 10:44:34.017642  125399 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:44:34.017815  125399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
I0210 10:44:34.018411  125399 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 10:44:34.018515  125399 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 10:44:34.018862  125399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 10:44:34.018912  125399 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:44:34.034236  125399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42393
I0210 10:44:34.034684  125399 main.go:141] libmachine: () Calling .GetVersion
I0210 10:44:34.035281  125399 main.go:141] libmachine: Using API Version  1
I0210 10:44:34.035315  125399 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:44:34.035680  125399 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:44:34.035932  125399 main.go:141] libmachine: (functional-567541) Calling .GetState
I0210 10:44:34.038013  125399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 10:44:34.038052  125399 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:44:34.055398  125399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
I0210 10:44:34.055814  125399 main.go:141] libmachine: () Calling .GetVersion
I0210 10:44:34.056495  125399 main.go:141] libmachine: Using API Version  1
I0210 10:44:34.056534  125399 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:44:34.056896  125399 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:44:34.057141  125399 main.go:141] libmachine: (functional-567541) Calling .DriverName
I0210 10:44:34.057416  125399 ssh_runner.go:195] Run: systemctl --version
I0210 10:44:34.057448  125399 main.go:141] libmachine: (functional-567541) Calling .GetSSHHostname
I0210 10:44:34.060638  125399 main.go:141] libmachine: (functional-567541) DBG | domain functional-567541 has defined MAC address 52:54:00:fe:3f:3d in network mk-functional-567541
I0210 10:44:34.061011  125399 main.go:141] libmachine: (functional-567541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3f:3d", ip: ""} in network mk-functional-567541: {Iface:virbr1 ExpiryTime:2025-02-10 11:41:42 +0000 UTC Type:0 Mac:52:54:00:fe:3f:3d Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-567541 Clientid:01:52:54:00:fe:3f:3d}
I0210 10:44:34.061044  125399 main.go:141] libmachine: (functional-567541) DBG | domain functional-567541 has defined IP address 192.168.39.8 and MAC address 52:54:00:fe:3f:3d in network mk-functional-567541
I0210 10:44:34.061172  125399 main.go:141] libmachine: (functional-567541) Calling .GetSSHPort
I0210 10:44:34.061369  125399 main.go:141] libmachine: (functional-567541) Calling .GetSSHKeyPath
I0210 10:44:34.061563  125399 main.go:141] libmachine: (functional-567541) Calling .GetSSHUsername
I0210 10:44:34.061702  125399 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/functional-567541/id_rsa Username:docker}
I0210 10:44:34.143857  125399 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 10:44:34.187296  125399 main.go:141] libmachine: Making call to close driver server
I0210 10:44:34.187322  125399 main.go:141] libmachine: (functional-567541) Calling .Close
I0210 10:44:34.187642  125399 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:44:34.187654  125399 main.go:141] libmachine: (functional-567541) DBG | Closing plugin on server side
I0210 10:44:34.187661  125399 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:44:34.187680  125399 main.go:141] libmachine: Making call to close driver server
I0210 10:44:34.187690  125399 main.go:141] libmachine: (functional-567541) Calling .Close
I0210 10:44:34.187927  125399 main.go:141] libmachine: (functional-567541) DBG | Closing plugin on server side
I0210 10:44:34.187965  125399 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:44:34.188002  125399 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567541 image ls --format json --alsologtostderr:
[{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"4f208b80a0fc4ad293e8bd47749f40ea061a9e07dacceb80267f25d3a50954c2","repoDigests":["localhost/minikube-local-cache-test@sha256:2f6ebad45963957986698bfa064c8a0ca6db159c067428d69259f82242ac317f"],"repoTags":["localhost/minikube-local-cache-test:functional-567541"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha25
6:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e","repoDigests":["docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7","docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34"],"repoTags":["docker.io/library/nginx:latest"],"size":"196149140"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repo
Digests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"6327322
7"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917
a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c
6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-567541"],"size":"4943877"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567541 image ls --format json --alsologtostderr:
I0210 10:44:33.794192  125346 out.go:345] Setting OutFile to fd 1 ...
I0210 10:44:33.794290  125346 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:44:33.794295  125346 out.go:358] Setting ErrFile to fd 2...
I0210 10:44:33.794300  125346 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:44:33.794499  125346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
I0210 10:44:33.795095  125346 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 10:44:33.795221  125346 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 10:44:33.795647  125346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 10:44:33.795719  125346 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:44:33.812159  125346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42543
I0210 10:44:33.812601  125346 main.go:141] libmachine: () Calling .GetVersion
I0210 10:44:33.813128  125346 main.go:141] libmachine: Using API Version  1
I0210 10:44:33.813156  125346 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:44:33.813497  125346 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:44:33.813730  125346 main.go:141] libmachine: (functional-567541) Calling .GetState
I0210 10:44:33.815878  125346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 10:44:33.815928  125346 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:44:33.831012  125346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
I0210 10:44:33.831503  125346 main.go:141] libmachine: () Calling .GetVersion
I0210 10:44:33.831990  125346 main.go:141] libmachine: Using API Version  1
I0210 10:44:33.832014  125346 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:44:33.832382  125346 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:44:33.832585  125346 main.go:141] libmachine: (functional-567541) Calling .DriverName
I0210 10:44:33.832788  125346 ssh_runner.go:195] Run: systemctl --version
I0210 10:44:33.832811  125346 main.go:141] libmachine: (functional-567541) Calling .GetSSHHostname
I0210 10:44:33.835796  125346 main.go:141] libmachine: (functional-567541) DBG | domain functional-567541 has defined MAC address 52:54:00:fe:3f:3d in network mk-functional-567541
I0210 10:44:33.836189  125346 main.go:141] libmachine: (functional-567541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3f:3d", ip: ""} in network mk-functional-567541: {Iface:virbr1 ExpiryTime:2025-02-10 11:41:42 +0000 UTC Type:0 Mac:52:54:00:fe:3f:3d Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-567541 Clientid:01:52:54:00:fe:3f:3d}
I0210 10:44:33.836225  125346 main.go:141] libmachine: (functional-567541) DBG | domain functional-567541 has defined IP address 192.168.39.8 and MAC address 52:54:00:fe:3f:3d in network mk-functional-567541
I0210 10:44:33.836388  125346 main.go:141] libmachine: (functional-567541) Calling .GetSSHPort
I0210 10:44:33.836566  125346 main.go:141] libmachine: (functional-567541) Calling .GetSSHKeyPath
I0210 10:44:33.836704  125346 main.go:141] libmachine: (functional-567541) Calling .GetSSHUsername
I0210 10:44:33.836833  125346 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/functional-567541/id_rsa Username:docker}
I0210 10:44:33.925256  125346 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 10:44:33.965810  125346 main.go:141] libmachine: Making call to close driver server
I0210 10:44:33.965822  125346 main.go:141] libmachine: (functional-567541) Calling .Close
I0210 10:44:33.966068  125346 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:44:33.966088  125346 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:44:33.966093  125346 main.go:141] libmachine: (functional-567541) DBG | Closing plugin on server side
I0210 10:44:33.966103  125346 main.go:141] libmachine: Making call to close driver server
I0210 10:44:33.966112  125346 main.go:141] libmachine: (functional-567541) Calling .Close
I0210 10:44:33.966337  125346 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:44:33.966351  125346 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:44:33.966362  125346 main.go:141] libmachine: (functional-567541) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
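
The JSON listing above is the machine-readable form of the same image inventory; a minimal sketch of consuming it from the host, where the jq filter is an added assumption (jq is not used by the test):

    # same command the test runs; prints one JSON array of image objects
    out/minikube-linux-amd64 -p functional-567541 image ls --format json
    # extract just the repo tags (assumes jq is installed on the host)
    out/minikube-linux-amd64 -p functional-567541 image ls --format json | jq -r '.[].repoTags[]'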

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567541 image ls --format yaml --alsologtostderr:
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 4f208b80a0fc4ad293e8bd47749f40ea061a9e07dacceb80267f25d3a50954c2
repoDigests:
- localhost/minikube-local-cache-test@sha256:2f6ebad45963957986698bfa064c8a0ca6db159c067428d69259f82242ac317f
repoTags:
- localhost/minikube-local-cache-test:functional-567541
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e
repoDigests:
- docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7
- docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34
repoTags:
- docker.io/library/nginx:latest
size: "196149140"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-567541
size: "4943877"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567541 image ls --format yaml --alsologtostderr:
I0210 10:44:33.480827  125266 out.go:345] Setting OutFile to fd 1 ...
I0210 10:44:33.481116  125266 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:44:33.481129  125266 out.go:358] Setting ErrFile to fd 2...
I0210 10:44:33.481133  125266 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:44:33.481755  125266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
I0210 10:44:33.482973  125266 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 10:44:33.483102  125266 config.go:182] Loaded profile config "functional-567541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 10:44:33.483489  125266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 10:44:33.483533  125266 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:44:33.498881  125266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
I0210 10:44:33.499357  125266 main.go:141] libmachine: () Calling .GetVersion
I0210 10:44:33.499897  125266 main.go:141] libmachine: Using API Version  1
I0210 10:44:33.499929  125266 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:44:33.500294  125266 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:44:33.500487  125266 main.go:141] libmachine: (functional-567541) Calling .GetState
I0210 10:44:33.502524  125266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 10:44:33.502559  125266 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 10:44:33.517996  125266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
I0210 10:44:33.518544  125266 main.go:141] libmachine: () Calling .GetVersion
I0210 10:44:33.519168  125266 main.go:141] libmachine: Using API Version  1
I0210 10:44:33.519225  125266 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 10:44:33.519613  125266 main.go:141] libmachine: () Calling .GetMachineName
I0210 10:44:33.519826  125266 main.go:141] libmachine: (functional-567541) Calling .DriverName
I0210 10:44:33.520076  125266 ssh_runner.go:195] Run: systemctl --version
I0210 10:44:33.520105  125266 main.go:141] libmachine: (functional-567541) Calling .GetSSHHostname
I0210 10:44:33.522950  125266 main.go:141] libmachine: (functional-567541) DBG | domain functional-567541 has defined MAC address 52:54:00:fe:3f:3d in network mk-functional-567541
I0210 10:44:33.523386  125266 main.go:141] libmachine: (functional-567541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3f:3d", ip: ""} in network mk-functional-567541: {Iface:virbr1 ExpiryTime:2025-02-10 11:41:42 +0000 UTC Type:0 Mac:52:54:00:fe:3f:3d Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-567541 Clientid:01:52:54:00:fe:3f:3d}
I0210 10:44:33.523417  125266 main.go:141] libmachine: (functional-567541) DBG | domain functional-567541 has defined IP address 192.168.39.8 and MAC address 52:54:00:fe:3f:3d in network mk-functional-567541
I0210 10:44:33.523482  125266 main.go:141] libmachine: (functional-567541) Calling .GetSSHPort
I0210 10:44:33.523620  125266 main.go:141] libmachine: (functional-567541) Calling .GetSSHKeyPath
I0210 10:44:33.523756  125266 main.go:141] libmachine: (functional-567541) Calling .GetSSHUsername
I0210 10:44:33.523902  125266 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/functional-567541/id_rsa Username:docker}
I0210 10:44:33.605675  125266 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 10:44:33.647900  125266 main.go:141] libmachine: Making call to close driver server
I0210 10:44:33.647921  125266 main.go:141] libmachine: (functional-567541) Calling .Close
I0210 10:44:33.648225  125266 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:44:33.648240  125266 main.go:141] libmachine: (functional-567541) DBG | Closing plugin on server side
I0210 10:44:33.648242  125266 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 10:44:33.648254  125266 main.go:141] libmachine: Making call to close driver server
I0210 10:44:33.648263  125266 main.go:141] libmachine: (functional-567541) Calling .Close
I0210 10:44:33.648512  125266 main.go:141] libmachine: Successfully made call to close driver server
I0210 10:44:33.648521  125266 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.8487658s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-567541
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (20.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-567541 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-567541 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-2khgn" [10b060d0-6e16-45d5-9b3e-b3d856bd08b9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-2khgn" [10b060d0-6e16-45d5-9b3e-b3d856bd08b9] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.004233398s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.15s)
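
The hello-node deployment created here is reused by the ServiceCmd subtests below; the same steps by hand, where the final wait is an assumed convenience and not part of the test:

    kubectl --context functional-567541 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-567541 expose deployment hello-node --type=NodePort --port=8080
    # block until the pod behind the app=hello-node label is Ready (assumed helper)
    kubectl --context functional-567541 wait --for=condition=ready pod -l app=hello-node --timeout=10m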

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image load --daemon kicbase/echo-server:functional-567541 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-567541 image load --daemon kicbase/echo-server:functional-567541 --alsologtostderr: (2.60101198s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image load --daemon kicbase/echo-server:functional-567541 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-567541
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image load --daemon kicbase/echo-server:functional-567541 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image save kicbase/echo-server:functional-567541 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image rm kicbase/echo-server:functional-567541 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-567541
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 image save --daemon kicbase/echo-server:functional-567541 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-567541
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.85s)
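
Taken together, the ImageSaveToFile/ImageRemove/ImageLoadFromFile/ImageSaveDaemon subtests form a save/load round trip; a minimal sketch of the same flow, with /tmp/echo-server-save.tar as an assumed tarball path:

    # export the tagged image from the cluster runtime to a tarball
    out/minikube-linux-amd64 -p functional-567541 image save kicbase/echo-server:functional-567541 /tmp/echo-server-save.tar
    # drop it from the cluster, then restore it from the tarball
    out/minikube-linux-amd64 -p functional-567541 image rm kicbase/echo-server:functional-567541
    out/minikube-linux-amd64 -p functional-567541 image load /tmp/echo-server-save.tar
    # push it back into the host docker daemon and confirm it arrived
    out/minikube-linux-amd64 -p functional-567541 image save --daemon kicbase/echo-server:functional-567541
    docker image inspect localhost/kicbase/echo-server:functional-567541
    # list what the cluster runtime currently holds
    out/minikube-linux-amd64 -p functional-567541 image ls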

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "367.136029ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "50.207751ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "341.306278ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "49.63994ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)
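
For reference, the profile listing commands timed in these ProfileCmd subtests; the --light/-l variants skip cluster status validation, which is consistent with the much shorter timings recorded above:

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list -l
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light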

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567541 /tmp/TestFunctionalparallelMountCmdany-port368204095/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739184257485200791" to /tmp/TestFunctionalparallelMountCmdany-port368204095/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739184257485200791" to /tmp/TestFunctionalparallelMountCmdany-port368204095/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739184257485200791" to /tmp/TestFunctionalparallelMountCmdany-port368204095/001/test-1739184257485200791
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567541 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.171517ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 10:44:17.745644  116470 retry.go:31] will retry after 645.990818ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 10 10:44 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 10 10:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 10 10:44 test-1739184257485200791
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh cat /mount-9p/test-1739184257485200791
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-567541 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [229c383b-f364-4090-af2a-09ed0cd1fdc8] Pending
helpers_test.go:344: "busybox-mount" [229c383b-f364-4090-af2a-09ed0cd1fdc8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [229c383b-f364-4090-af2a-09ed0cd1fdc8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [229c383b-f364-4090-af2a-09ed0cd1fdc8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.002575744s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-567541 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567541 /tmp/TestFunctionalparallelMountCmdany-port368204095/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.18s)
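
The 9p mount flow exercised here can be reproduced by hand; a minimal sketch with /tmp/mount-src as an assumed host directory (the test uses a generated temp dir), with --port 46464 optional as in the specific-port subtest:

    # start the 9p mount in the background
    out/minikube-linux-amd64 mount -p functional-567541 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    # confirm the mount is visible inside the guest and inspect it
    out/minikube-linux-amd64 -p functional-567541 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-567541 ssh -- ls -la /mount-9p
    # tear the mount down when finished
    out/minikube-linux-amd64 -p functional-567541 ssh "sudo umount -f /mount-9p"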

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567541 /tmp/TestFunctionalparallelMountCmdspecific-port2348877673/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567541 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.610421ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 10:44:26.909035  116470 retry.go:31] will retry after 467.774252ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567541 /tmp/TestFunctionalparallelMountCmdspecific-port2348877673/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567541 ssh "sudo umount -f /mount-9p": exit status 1 (281.353594ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-567541 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567541 /tmp/TestFunctionalparallelMountCmdspecific-port2348877673/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.89s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 service list -o json
functional_test.go:1511: Took "959.396972ms" to run "out/minikube-linux-amd64 -p functional-567541 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.96s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709055760/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709055760/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709055760/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-567541 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709055760/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709055760/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709055760/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.04s)
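
The cleanup path verified here relies on the --kill flag, which stops the mount daemons minikube spawned for the profile:

    # stop all background mount processes for this profile
    out/minikube-linux-amd64 mount -p functional-567541 --kill=true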

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.8:30764
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.8:30764
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
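
The ServiceCmd lookups resolve the hello-node service to a NodePort URL on the VM; a minimal sketch, where the curl probe is an added assumption and the 30764 port will differ between runs:

    # plain and https endpoint lookups, as in the URL and HTTPS subtests
    out/minikube-linux-amd64 -p functional-567541 service hello-node --url
    out/minikube-linux-amd64 -p functional-567541 service --namespace=default --https --url hello-node
    # probe the endpoint reported above (assumed extra step, not run by the test)
    curl -sI http://192.168.39.8:30764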

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.43s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-567541 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
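
update-context refreshes the kubeconfig entry for the profile so it points at the cluster's current IP and port; a minimal sketch, with the kubectl check as an assumed follow-up:

    out/minikube-linux-amd64 -p functional-567541 update-context --alsologtostderr -v=2
    # confirm which context kubectl now targets (assumed check)
    kubectl config current-context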

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-567541
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-567541
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-567541
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (190.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-955965 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0210 10:45:53.024802  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:46:20.735346  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-955965 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m10.085472428s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (190.76s)
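
The multi-control-plane cluster created here is reused by the rest of the TestMultiControlPlane suite; the start and status invocations, repeated for readability:

    # --ha requests a multi-control-plane cluster on the KVM driver with CRI-O
    out/minikube-linux-amd64 start -p ha-955965 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr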

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-955965 -- rollout status deployment/busybox: (5.226585991s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-488p4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-5g48b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-tgpsj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-488p4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-5g48b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-tgpsj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-488p4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-5g48b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-tgpsj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.24s)
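
The DNS smoke test applies a busybox deployment and resolves external and in-cluster names from each replica; the same steps via the minikube-wrapped kubectl, noting that pod names are generated per run:

    out/minikube-linux-amd64 kubectl -p ha-955965 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-955965 -- rollout status deployment/busybox
    # resolve an external name and the in-cluster API service from one replica
    out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-488p4 -- nslookup kubernetes.io
    out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-488p4 -- nslookup kubernetes.default.svc.cluster.local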

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-488p4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-488p4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-5g48b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-5g48b -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-tgpsj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-tgpsj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.14s)
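
For reference, the host-reachability check above can be repeated by hand against the same profile. This is a minimal sketch using only commands already shown in this run; the pod name busybox-58667487b6-488p4 and the gateway IP 192.168.39.1 are specific to this job and will differ on other clusters:

        # resolve the host gateway from inside a busybox pod, then ping it once
        out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-488p4 -- \
          sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
        out/minikube-linux-amd64 kubectl -p ha-955965 -- exec busybox-58667487b6-488p4 -- \
          sh -c "ping -c 1 192.168.39.1"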

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (60.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-955965 -v=7 --alsologtostderr
E0210 10:49:06.276401  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:49:06.282775  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:49:06.294154  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:49:06.316009  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:49:06.357432  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:49:06.438908  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:49:06.600465  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:49:06.921806  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:49:07.563540  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:49:08.845558  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:49:11.406882  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-955965 -v=7 --alsologtostderr: (59.253805927s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.10s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-955965 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp testdata/cp-test.txt ha-955965:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1693363693/001/cp-test_ha-955965.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965 "sudo cat /home/docker/cp-test.txt"
E0210 10:49:16.529228  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965:/home/docker/cp-test.txt ha-955965-m02:/home/docker/cp-test_ha-955965_ha-955965-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m02 "sudo cat /home/docker/cp-test_ha-955965_ha-955965-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965:/home/docker/cp-test.txt ha-955965-m03:/home/docker/cp-test_ha-955965_ha-955965-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m03 "sudo cat /home/docker/cp-test_ha-955965_ha-955965-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965:/home/docker/cp-test.txt ha-955965-m04:/home/docker/cp-test_ha-955965_ha-955965-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m04 "sudo cat /home/docker/cp-test_ha-955965_ha-955965-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp testdata/cp-test.txt ha-955965-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1693363693/001/cp-test_ha-955965-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m02:/home/docker/cp-test.txt ha-955965:/home/docker/cp-test_ha-955965-m02_ha-955965.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965 "sudo cat /home/docker/cp-test_ha-955965-m02_ha-955965.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m02:/home/docker/cp-test.txt ha-955965-m03:/home/docker/cp-test_ha-955965-m02_ha-955965-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m03 "sudo cat /home/docker/cp-test_ha-955965-m02_ha-955965-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m02:/home/docker/cp-test.txt ha-955965-m04:/home/docker/cp-test_ha-955965-m02_ha-955965-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m04 "sudo cat /home/docker/cp-test_ha-955965-m02_ha-955965-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp testdata/cp-test.txt ha-955965-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1693363693/001/cp-test_ha-955965-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m03:/home/docker/cp-test.txt ha-955965:/home/docker/cp-test_ha-955965-m03_ha-955965.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965 "sudo cat /home/docker/cp-test_ha-955965-m03_ha-955965.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m03:/home/docker/cp-test.txt ha-955965-m02:/home/docker/cp-test_ha-955965-m03_ha-955965-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m02 "sudo cat /home/docker/cp-test_ha-955965-m03_ha-955965-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m03:/home/docker/cp-test.txt ha-955965-m04:/home/docker/cp-test_ha-955965-m03_ha-955965-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m04 "sudo cat /home/docker/cp-test_ha-955965-m03_ha-955965-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp testdata/cp-test.txt ha-955965-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1693363693/001/cp-test_ha-955965-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m04:/home/docker/cp-test.txt ha-955965:/home/docker/cp-test_ha-955965-m04_ha-955965.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965 "sudo cat /home/docker/cp-test_ha-955965-m04_ha-955965.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m04:/home/docker/cp-test.txt ha-955965-m02:/home/docker/cp-test_ha-955965-m04_ha-955965-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m02 "sudo cat /home/docker/cp-test_ha-955965-m04_ha-955965-m02.txt"
E0210 10:49:26.770910  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m04:/home/docker/cp-test.txt ha-955965-m03:/home/docker/cp-test_ha-955965-m04_ha-955965-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m03 "sudo cat /home/docker/cp-test_ha-955965-m04_ha-955965-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.74s)
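
The copy verification above repeats one pattern per node pair: push a file, then read it back over ssh on the destination. A minimal by-hand sketch, reusing commands that appear verbatim in this run (profile and node names are specific to this job):

        # local file -> node, then confirm the contents on that node
        out/minikube-linux-amd64 -p ha-955965 cp testdata/cp-test.txt ha-955965-m02:/home/docker/cp-test.txt
        out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m02 "sudo cat /home/docker/cp-test.txt"
        # node -> node, then confirm on the destination node
        out/minikube-linux-amd64 -p ha-955965 cp ha-955965-m02:/home/docker/cp-test.txt ha-955965-m03:/home/docker/cp-test_ha-955965-m02_ha-955965-m03.txt
        out/minikube-linux-amd64 -p ha-955965 ssh -n ha-955965-m03 "sudo cat /home/docker/cp-test_ha-955965-m02_ha-955965-m03.txt"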

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 node stop m02 -v=7 --alsologtostderr
E0210 10:49:47.252376  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:50:28.214641  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:50:53.023387  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-955965 node stop m02 -v=7 --alsologtostderr: (1m30.964591373s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr: exit status 7 (649.602244ms)

                                                
                                                
-- stdout --
	ha-955965
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-955965-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-955965-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-955965-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 10:50:58.694465  130197 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:50:58.694566  130197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:50:58.694571  130197 out.go:358] Setting ErrFile to fd 2...
	I0210 10:50:58.694582  130197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:50:58.694780  130197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 10:50:58.694954  130197 out.go:352] Setting JSON to false
	I0210 10:50:58.694978  130197 mustload.go:65] Loading cluster: ha-955965
	I0210 10:50:58.695090  130197 notify.go:220] Checking for updates...
	I0210 10:50:58.695370  130197 config.go:182] Loaded profile config "ha-955965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 10:50:58.695393  130197 status.go:174] checking status of ha-955965 ...
	I0210 10:50:58.695800  130197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:50:58.695852  130197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:50:58.715372  130197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41351
	I0210 10:50:58.715872  130197 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:50:58.716460  130197 main.go:141] libmachine: Using API Version  1
	I0210 10:50:58.716491  130197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:50:58.716827  130197 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:50:58.717067  130197 main.go:141] libmachine: (ha-955965) Calling .GetState
	I0210 10:50:58.718780  130197 status.go:371] ha-955965 host status = "Running" (err=<nil>)
	I0210 10:50:58.718801  130197 host.go:66] Checking if "ha-955965" exists ...
	I0210 10:50:58.719141  130197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:50:58.719180  130197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:50:58.735237  130197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36235
	I0210 10:50:58.735630  130197 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:50:58.736145  130197 main.go:141] libmachine: Using API Version  1
	I0210 10:50:58.736172  130197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:50:58.736465  130197 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:50:58.736666  130197 main.go:141] libmachine: (ha-955965) Calling .GetIP
	I0210 10:50:58.739398  130197 main.go:141] libmachine: (ha-955965) DBG | domain ha-955965 has defined MAC address 52:54:00:0c:1d:5e in network mk-ha-955965
	I0210 10:50:58.739836  130197 main.go:141] libmachine: (ha-955965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:1d:5e", ip: ""} in network mk-ha-955965: {Iface:virbr1 ExpiryTime:2025-02-10 11:45:09 +0000 UTC Type:0 Mac:52:54:00:0c:1d:5e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-955965 Clientid:01:52:54:00:0c:1d:5e}
	I0210 10:50:58.739857  130197 main.go:141] libmachine: (ha-955965) DBG | domain ha-955965 has defined IP address 192.168.39.253 and MAC address 52:54:00:0c:1d:5e in network mk-ha-955965
	I0210 10:50:58.739948  130197 host.go:66] Checking if "ha-955965" exists ...
	I0210 10:50:58.740281  130197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:50:58.740319  130197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:50:58.759711  130197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39413
	I0210 10:50:58.760193  130197 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:50:58.760797  130197 main.go:141] libmachine: Using API Version  1
	I0210 10:50:58.760828  130197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:50:58.761246  130197 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:50:58.761482  130197 main.go:141] libmachine: (ha-955965) Calling .DriverName
	I0210 10:50:58.761717  130197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:50:58.761764  130197 main.go:141] libmachine: (ha-955965) Calling .GetSSHHostname
	I0210 10:50:58.765380  130197 main.go:141] libmachine: (ha-955965) DBG | domain ha-955965 has defined MAC address 52:54:00:0c:1d:5e in network mk-ha-955965
	I0210 10:50:58.765847  130197 main.go:141] libmachine: (ha-955965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:1d:5e", ip: ""} in network mk-ha-955965: {Iface:virbr1 ExpiryTime:2025-02-10 11:45:09 +0000 UTC Type:0 Mac:52:54:00:0c:1d:5e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-955965 Clientid:01:52:54:00:0c:1d:5e}
	I0210 10:50:58.765879  130197 main.go:141] libmachine: (ha-955965) DBG | domain ha-955965 has defined IP address 192.168.39.253 and MAC address 52:54:00:0c:1d:5e in network mk-ha-955965
	I0210 10:50:58.766235  130197 main.go:141] libmachine: (ha-955965) Calling .GetSSHPort
	I0210 10:50:58.766415  130197 main.go:141] libmachine: (ha-955965) Calling .GetSSHKeyPath
	I0210 10:50:58.766572  130197 main.go:141] libmachine: (ha-955965) Calling .GetSSHUsername
	I0210 10:50:58.766675  130197 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/ha-955965/id_rsa Username:docker}
	I0210 10:50:58.849197  130197 ssh_runner.go:195] Run: systemctl --version
	I0210 10:50:58.856297  130197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:50:58.872746  130197 kubeconfig.go:125] found "ha-955965" server: "https://192.168.39.254:8443"
	I0210 10:50:58.872784  130197 api_server.go:166] Checking apiserver status ...
	I0210 10:50:58.872814  130197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 10:50:58.886316  130197 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup
	W0210 10:50:58.895955  130197 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 10:50:58.896010  130197 ssh_runner.go:195] Run: ls
	I0210 10:50:58.900257  130197 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0210 10:50:58.906084  130197 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0210 10:50:58.906112  130197 status.go:463] ha-955965 apiserver status = Running (err=<nil>)
	I0210 10:50:58.906123  130197 status.go:176] ha-955965 status: &{Name:ha-955965 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:50:58.906166  130197 status.go:174] checking status of ha-955965-m02 ...
	I0210 10:50:58.906608  130197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:50:58.906658  130197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:50:58.922162  130197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I0210 10:50:58.922720  130197 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:50:58.923351  130197 main.go:141] libmachine: Using API Version  1
	I0210 10:50:58.923380  130197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:50:58.923753  130197 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:50:58.923954  130197 main.go:141] libmachine: (ha-955965-m02) Calling .GetState
	I0210 10:50:58.925648  130197 status.go:371] ha-955965-m02 host status = "Stopped" (err=<nil>)
	I0210 10:50:58.925665  130197 status.go:384] host is not running, skipping remaining checks
	I0210 10:50:58.925672  130197 status.go:176] ha-955965-m02 status: &{Name:ha-955965-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:50:58.925695  130197 status.go:174] checking status of ha-955965-m03 ...
	I0210 10:50:58.926127  130197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:50:58.926182  130197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:50:58.942645  130197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38779
	I0210 10:50:58.943087  130197 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:50:58.943649  130197 main.go:141] libmachine: Using API Version  1
	I0210 10:50:58.943675  130197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:50:58.944056  130197 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:50:58.944228  130197 main.go:141] libmachine: (ha-955965-m03) Calling .GetState
	I0210 10:50:58.945922  130197 status.go:371] ha-955965-m03 host status = "Running" (err=<nil>)
	I0210 10:50:58.945941  130197 host.go:66] Checking if "ha-955965-m03" exists ...
	I0210 10:50:58.946227  130197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:50:58.946267  130197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:50:58.964213  130197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
	I0210 10:50:58.964670  130197 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:50:58.965128  130197 main.go:141] libmachine: Using API Version  1
	I0210 10:50:58.965156  130197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:50:58.965543  130197 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:50:58.965750  130197 main.go:141] libmachine: (ha-955965-m03) Calling .GetIP
	I0210 10:50:58.968260  130197 main.go:141] libmachine: (ha-955965-m03) DBG | domain ha-955965-m03 has defined MAC address 52:54:00:6d:31:54 in network mk-ha-955965
	I0210 10:50:58.968682  130197 main.go:141] libmachine: (ha-955965-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:31:54", ip: ""} in network mk-ha-955965: {Iface:virbr1 ExpiryTime:2025-02-10 11:47:06 +0000 UTC Type:0 Mac:52:54:00:6d:31:54 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-955965-m03 Clientid:01:52:54:00:6d:31:54}
	I0210 10:50:58.968715  130197 main.go:141] libmachine: (ha-955965-m03) DBG | domain ha-955965-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:6d:31:54 in network mk-ha-955965
	I0210 10:50:58.968835  130197 host.go:66] Checking if "ha-955965-m03" exists ...
	I0210 10:50:58.969107  130197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:50:58.969152  130197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:50:58.984361  130197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0210 10:50:58.984749  130197 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:50:58.985179  130197 main.go:141] libmachine: Using API Version  1
	I0210 10:50:58.985202  130197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:50:58.985486  130197 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:50:58.985701  130197 main.go:141] libmachine: (ha-955965-m03) Calling .DriverName
	I0210 10:50:58.985899  130197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:50:58.985929  130197 main.go:141] libmachine: (ha-955965-m03) Calling .GetSSHHostname
	I0210 10:50:58.988516  130197 main.go:141] libmachine: (ha-955965-m03) DBG | domain ha-955965-m03 has defined MAC address 52:54:00:6d:31:54 in network mk-ha-955965
	I0210 10:50:58.988969  130197 main.go:141] libmachine: (ha-955965-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:31:54", ip: ""} in network mk-ha-955965: {Iface:virbr1 ExpiryTime:2025-02-10 11:47:06 +0000 UTC Type:0 Mac:52:54:00:6d:31:54 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-955965-m03 Clientid:01:52:54:00:6d:31:54}
	I0210 10:50:58.988990  130197 main.go:141] libmachine: (ha-955965-m03) DBG | domain ha-955965-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:6d:31:54 in network mk-ha-955965
	I0210 10:50:58.989135  130197 main.go:141] libmachine: (ha-955965-m03) Calling .GetSSHPort
	I0210 10:50:58.989315  130197 main.go:141] libmachine: (ha-955965-m03) Calling .GetSSHKeyPath
	I0210 10:50:58.989486  130197 main.go:141] libmachine: (ha-955965-m03) Calling .GetSSHUsername
	I0210 10:50:58.989626  130197 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/ha-955965-m03/id_rsa Username:docker}
	I0210 10:50:59.071163  130197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:50:59.089639  130197 kubeconfig.go:125] found "ha-955965" server: "https://192.168.39.254:8443"
	I0210 10:50:59.089676  130197 api_server.go:166] Checking apiserver status ...
	I0210 10:50:59.089718  130197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 10:50:59.103747  130197 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1468/cgroup
	W0210 10:50:59.112208  130197 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1468/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 10:50:59.112268  130197 ssh_runner.go:195] Run: ls
	I0210 10:50:59.116535  130197 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0210 10:50:59.121478  130197 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0210 10:50:59.121498  130197 status.go:463] ha-955965-m03 apiserver status = Running (err=<nil>)
	I0210 10:50:59.121506  130197 status.go:176] ha-955965-m03 status: &{Name:ha-955965-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:50:59.121527  130197 status.go:174] checking status of ha-955965-m04 ...
	I0210 10:50:59.121795  130197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:50:59.121829  130197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:50:59.137162  130197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0210 10:50:59.137640  130197 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:50:59.138172  130197 main.go:141] libmachine: Using API Version  1
	I0210 10:50:59.138197  130197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:50:59.138485  130197 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:50:59.138654  130197 main.go:141] libmachine: (ha-955965-m04) Calling .GetState
	I0210 10:50:59.139968  130197 status.go:371] ha-955965-m04 host status = "Running" (err=<nil>)
	I0210 10:50:59.139988  130197 host.go:66] Checking if "ha-955965-m04" exists ...
	I0210 10:50:59.140312  130197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:50:59.140348  130197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:50:59.156290  130197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I0210 10:50:59.156636  130197 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:50:59.157044  130197 main.go:141] libmachine: Using API Version  1
	I0210 10:50:59.157066  130197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:50:59.157355  130197 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:50:59.157560  130197 main.go:141] libmachine: (ha-955965-m04) Calling .GetIP
	I0210 10:50:59.160747  130197 main.go:141] libmachine: (ha-955965-m04) DBG | domain ha-955965-m04 has defined MAC address 52:54:00:38:8b:83 in network mk-ha-955965
	I0210 10:50:59.161233  130197 main.go:141] libmachine: (ha-955965-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8b:83", ip: ""} in network mk-ha-955965: {Iface:virbr1 ExpiryTime:2025-02-10 11:48:29 +0000 UTC Type:0 Mac:52:54:00:38:8b:83 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-955965-m04 Clientid:01:52:54:00:38:8b:83}
	I0210 10:50:59.161270  130197 main.go:141] libmachine: (ha-955965-m04) DBG | domain ha-955965-m04 has defined IP address 192.168.39.160 and MAC address 52:54:00:38:8b:83 in network mk-ha-955965
	I0210 10:50:59.161454  130197 host.go:66] Checking if "ha-955965-m04" exists ...
	I0210 10:50:59.161729  130197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 10:50:59.161769  130197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 10:50:59.184469  130197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0210 10:50:59.184819  130197 main.go:141] libmachine: () Calling .GetVersion
	I0210 10:50:59.185263  130197 main.go:141] libmachine: Using API Version  1
	I0210 10:50:59.185289  130197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 10:50:59.185589  130197 main.go:141] libmachine: () Calling .GetMachineName
	I0210 10:50:59.185749  130197 main.go:141] libmachine: (ha-955965-m04) Calling .DriverName
	I0210 10:50:59.185901  130197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:50:59.185925  130197 main.go:141] libmachine: (ha-955965-m04) Calling .GetSSHHostname
	I0210 10:50:59.188198  130197 main.go:141] libmachine: (ha-955965-m04) DBG | domain ha-955965-m04 has defined MAC address 52:54:00:38:8b:83 in network mk-ha-955965
	I0210 10:50:59.188638  130197 main.go:141] libmachine: (ha-955965-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8b:83", ip: ""} in network mk-ha-955965: {Iface:virbr1 ExpiryTime:2025-02-10 11:48:29 +0000 UTC Type:0 Mac:52:54:00:38:8b:83 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-955965-m04 Clientid:01:52:54:00:38:8b:83}
	I0210 10:50:59.188655  130197 main.go:141] libmachine: (ha-955965-m04) DBG | domain ha-955965-m04 has defined IP address 192.168.39.160 and MAC address 52:54:00:38:8b:83 in network mk-ha-955965
	I0210 10:50:59.188848  130197 main.go:141] libmachine: (ha-955965-m04) Calling .GetSSHPort
	I0210 10:50:59.189016  130197 main.go:141] libmachine: (ha-955965-m04) Calling .GetSSHKeyPath
	I0210 10:50:59.189142  130197 main.go:141] libmachine: (ha-955965-m04) Calling .GetSSHUsername
	I0210 10:50:59.189258  130197 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/ha-955965-m04/id_rsa Username:docker}
	I0210 10:50:59.275729  130197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:50:59.292149  130197 status.go:176] ha-955965-m04 status: &{Name:ha-955965-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.62s)
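
As the status output above shows, the status command reports the stopped control-plane node and returns a non-zero code (exit status 7 in this run), so a degraded cluster can be detected from the exit code alone. A minimal sketch reusing the command from this run; the || echo wrapper is illustrative and not part of the test:

        out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr || echo "cluster degraded (exit $?)"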

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (50.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-955965 node start m02 -v=7 --alsologtostderr: (49.383799012s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr
E0210 10:51:50.136423  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (50.30s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (428.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-955965 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-955965 -v=7 --alsologtostderr
E0210 10:54:06.276569  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:54:33.978140  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:55:53.023702  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-955965 -v=7 --alsologtostderr: (4m33.958719212s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-955965 --wait=true -v=7 --alsologtostderr
E0210 10:57:16.096841  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-955965 --wait=true -v=7 --alsologtostderr: (2m34.033756597s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-955965
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (428.10s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 node delete m03 -v=7 --alsologtostderr
E0210 10:59:06.277198  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-955965 node delete m03 -v=7 --alsologtostderr: (17.302922957s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.05s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 stop -v=7 --alsologtostderr
E0210 11:00:53.023586  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-955965 stop -v=7 --alsologtostderr: (4m32.787793515s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr: exit status 7 (116.311285ms)

                                                
                                                
-- stdout --
	ha-955965
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-955965-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-955965-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 11:03:50.692125  134415 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:03:50.692359  134415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:03:50.692367  134415 out.go:358] Setting ErrFile to fd 2...
	I0210 11:03:50.692372  134415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:03:50.692622  134415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:03:50.692905  134415 out.go:352] Setting JSON to false
	I0210 11:03:50.692978  134415 mustload.go:65] Loading cluster: ha-955965
	I0210 11:03:50.693073  134415 notify.go:220] Checking for updates...
	I0210 11:03:50.694082  134415 config.go:182] Loaded profile config "ha-955965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:03:50.694109  134415 status.go:174] checking status of ha-955965 ...
	I0210 11:03:50.694538  134415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:03:50.694616  134415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:03:50.714896  134415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45245
	I0210 11:03:50.715343  134415 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:03:50.715950  134415 main.go:141] libmachine: Using API Version  1
	I0210 11:03:50.715978  134415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:03:50.716390  134415 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:03:50.716615  134415 main.go:141] libmachine: (ha-955965) Calling .GetState
	I0210 11:03:50.718307  134415 status.go:371] ha-955965 host status = "Stopped" (err=<nil>)
	I0210 11:03:50.718323  134415 status.go:384] host is not running, skipping remaining checks
	I0210 11:03:50.718330  134415 status.go:176] ha-955965 status: &{Name:ha-955965 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 11:03:50.718371  134415 status.go:174] checking status of ha-955965-m02 ...
	I0210 11:03:50.718649  134415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:03:50.718682  134415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:03:50.733192  134415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37015
	I0210 11:03:50.733615  134415 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:03:50.734143  134415 main.go:141] libmachine: Using API Version  1
	I0210 11:03:50.734172  134415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:03:50.734484  134415 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:03:50.734669  134415 main.go:141] libmachine: (ha-955965-m02) Calling .GetState
	I0210 11:03:50.736052  134415 status.go:371] ha-955965-m02 host status = "Stopped" (err=<nil>)
	I0210 11:03:50.736070  134415 status.go:384] host is not running, skipping remaining checks
	I0210 11:03:50.736076  134415 status.go:176] ha-955965-m02 status: &{Name:ha-955965-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 11:03:50.736096  134415 status.go:174] checking status of ha-955965-m04 ...
	I0210 11:03:50.736397  134415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:03:50.736443  134415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:03:50.750632  134415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40493
	I0210 11:03:50.751059  134415 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:03:50.751593  134415 main.go:141] libmachine: Using API Version  1
	I0210 11:03:50.751615  134415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:03:50.751899  134415 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:03:50.752057  134415 main.go:141] libmachine: (ha-955965-m04) Calling .GetState
	I0210 11:03:50.753609  134415 status.go:371] ha-955965-m04 host status = "Stopped" (err=<nil>)
	I0210 11:03:50.753620  134415 status.go:384] host is not running, skipping remaining checks
	I0210 11:03:50.753625  134415 status.go:176] ha-955965-m04 status: &{Name:ha-955965-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (110.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-955965 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0210 11:04:06.278487  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:05:29.340480  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-955965 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.154222265s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (110.94s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-955965 --control-plane -v=7 --alsologtostderr
E0210 11:05:53.023104  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-955965 --control-plane -v=7 --alsologtostderr: (1m19.783828894s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-955965 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.64s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                    
TestJSONOutput/start/Command (53.95s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-851713 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-851713 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (53.946067671s)
--- PASS: TestJSONOutput/start/Command (53.95s)
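
With --output=json, minikube start emits one CloudEvents-style JSON object per line (the TestErrorJSONOutput stdout further down shows the shape, including type io.k8s.sigs.minikube.step and data.currentstep/data.message). A hypothetical one-liner for watching the step messages, assuming jq is installed; jq is not used by the test itself:

        out/minikube-linux-amd64 start -p json-output-851713 --output=json --user=testUser \
          --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio \
          | jq -r 'select(.type=="io.k8s.sigs.minikube.step") | .data.currentstep + ": " + .data.message'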

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-851713 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-851713 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-851713 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-851713 --output=json --user=testUser: (7.346936133s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-934120 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-934120 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.788541ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"36f1e0b5-c065-4fe6-aa58-648ae0931a36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-934120] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f9ab075-1bfb-40d2-8882-d22ff9b35df9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20385"}}
	{"specversion":"1.0","id":"b673b161-91c2-4e1a-942d-2cd3d0f5b5e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e4e96e0d-7208-4a51-97c9-d9f4c4221006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig"}}
	{"specversion":"1.0","id":"9acbe547-15ba-4b2a-be88-b737e50c94be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube"}}
	{"specversion":"1.0","id":"fe96d6e6-fe8e-4fd7-86ac-6f50b447aabf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7fef6ee4-f8f6-4089-a84b-8ec0d8dc5400","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"debdeecf-89d0-4b99-95ec-f48271162dbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-934120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-934120
--- PASS: TestErrorJSONOutput (0.20s)
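Each line in the stdout block above is a self-contained JSON event (type io.k8s.sigs.minikube.step, .info, or .error) emitted by --output=json. A minimal Go sketch for consuming such a stream, assuming one event per line as shown; the struct below models only the keys visible in those events:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // minikubeEvent covers only the fields visible in the events above.
    type minikubeEvent struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"` // message, name, currentstep, exitcode, ...
    }

    func main() {
        // Pipe `out/minikube-linux-amd64 start ... --output=json` into this program.
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev minikubeEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip any non-JSON lines
            }
            fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
        }
    }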

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (93.27s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-001829 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-001829 --driver=kvm2  --container-runtime=crio: (45.079008385s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-016013 --driver=kvm2  --container-runtime=crio
E0210 11:09:06.276377  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-016013 --driver=kvm2  --container-runtime=crio: (45.165931431s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-001829
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-016013
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-016013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-016013
helpers_test.go:175: Cleaning up "first-001829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-001829
--- PASS: TestMinikubeProfile (93.27s)
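TestMinikubeProfile switches the active profile twice and reads `profile list -ojson` after each switch. A hedged sketch of consuming that output from Go without assuming its exact schema (the only assumption is that the top level is a JSON object); the binary path is the one used throughout this report:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
        if err != nil {
            panic(err)
        }
        // Decode into a generic map so no field names are assumed here.
        var profiles map[string]json.RawMessage
        if err := json.Unmarshal(out, &profiles); err != nil {
            panic(err)
        }
        for key := range profiles {
            fmt.Println("top-level key:", key)
        }
    }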

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.24s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-944872 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-944872 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.23760178s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.24s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-944872 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-944872 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
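The verify step above simply runs `mount` inside the guest over `minikube ssh` and greps for the 9p entry created by the --mount flags used in StartWithMountFirst. A small Go sketch of the same check, reusing the exact profile name and binary path from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same as: out/minikube-linux-amd64 -p mount-start-1-944872 ssh -- mount
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", "mount-start-1-944872", "ssh", "--", "mount").Output()
        if err != nil {
            panic(err)
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, "9p") {
                fmt.Println("found 9p mount:", line)
            }
        }
    }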

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.17s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-964994 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-964994 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.167564351s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.17s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-964994 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-964994 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-944872 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-964994 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-964994 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-964994
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-964994: (1.273030967s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-964994
E0210 11:10:53.027457  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-964994: (20.995499692s)
--- PASS: TestMountStart/serial/RestartStopped (22.00s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-964994 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-964994 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (113.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-646190 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-646190 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.935229067s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.35s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-646190 -- rollout status deployment/busybox: (4.368094264s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- exec busybox-58667487b6-cl8g9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- exec busybox-58667487b6-vxpnb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- exec busybox-58667487b6-cl8g9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- exec busybox-58667487b6-vxpnb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- exec busybox-58667487b6-cl8g9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- exec busybox-58667487b6-vxpnb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.75s)
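The deploy test resolves three names (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) from each busybox pod via `kubectl exec ... nslookup`. A sketch that loops over the same pod names and lookups shown above, shelling out to the same `minikube kubectl` wrapper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        pods := []string{"busybox-58667487b6-cl8g9", "busybox-58667487b6-vxpnb"} // names from the log
        names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
        for _, pod := range pods {
            for _, name := range names {
                cmd := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "multinode-646190", "--",
                    "exec", pod, "--", "nslookup", name)
                if out, err := cmd.CombinedOutput(); err != nil {
                    fmt.Printf("%s -> %s failed: %v\n%s", pod, name, err, out)
                } else {
                    fmt.Printf("%s -> %s ok\n", pod, name)
                }
            }
        }
    }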

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- exec busybox-58667487b6-cl8g9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- exec busybox-58667487b6-cl8g9 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- exec busybox-58667487b6-vxpnb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-646190 -- exec busybox-58667487b6-vxpnb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
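The host-ping step extracts the resolved address of host.minikube.internal with `awk 'NR==5' | cut -d' ' -f3`, i.e. the 3rd space-separated field of the 5th line of busybox's nslookup output, and then pings it. The same extraction in Go, as a sketch (feed it the nslookup output captured from a pod):

    package main

    import (
        "fmt"
        "io"
        "os"
        "strings"
    )

    // hostIPFromNslookup mirrors the test's `awk 'NR==5' | cut -d' ' -f3` pipeline:
    // take the 5th line of the nslookup output and its 3rd space-separated field.
    func hostIPFromNslookup(out string) string {
        lines := strings.Split(out, "\n")
        if len(lines) < 5 {
            return ""
        }
        fields := strings.Split(lines[4], " ")
        if len(fields) < 3 {
            return ""
        }
        return fields[2]
    }

    func main() {
        in, _ := io.ReadAll(os.Stdin) // pipe the pod's nslookup output in
        fmt.Println(hostIPFromNslookup(string(in)))
    }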

                                                
                                    
TestMultiNode/serial/AddNode (52.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-646190 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-646190 -v 3 --alsologtostderr: (51.455207331s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 status --alsologtostderr
E0210 11:13:56.098989  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/AddNode (52.04s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-646190 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp testdata/cp-test.txt multinode-646190:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp multinode-646190:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3810951027/001/cp-test_multinode-646190.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp multinode-646190:/home/docker/cp-test.txt multinode-646190-m02:/home/docker/cp-test_multinode-646190_multinode-646190-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m02 "sudo cat /home/docker/cp-test_multinode-646190_multinode-646190-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp multinode-646190:/home/docker/cp-test.txt multinode-646190-m03:/home/docker/cp-test_multinode-646190_multinode-646190-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m03 "sudo cat /home/docker/cp-test_multinode-646190_multinode-646190-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp testdata/cp-test.txt multinode-646190-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp multinode-646190-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3810951027/001/cp-test_multinode-646190-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp multinode-646190-m02:/home/docker/cp-test.txt multinode-646190:/home/docker/cp-test_multinode-646190-m02_multinode-646190.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190 "sudo cat /home/docker/cp-test_multinode-646190-m02_multinode-646190.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp multinode-646190-m02:/home/docker/cp-test.txt multinode-646190-m03:/home/docker/cp-test_multinode-646190-m02_multinode-646190-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m03 "sudo cat /home/docker/cp-test_multinode-646190-m02_multinode-646190-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp testdata/cp-test.txt multinode-646190-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp multinode-646190-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3810951027/001/cp-test_multinode-646190-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp multinode-646190-m03:/home/docker/cp-test.txt multinode-646190:/home/docker/cp-test_multinode-646190-m03_multinode-646190.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190 "sudo cat /home/docker/cp-test_multinode-646190-m03_multinode-646190.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 cp multinode-646190-m03:/home/docker/cp-test.txt multinode-646190-m02:/home/docker/cp-test_multinode-646190-m03_multinode-646190-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 ssh -n multinode-646190-m02 "sudo cat /home/docker/cp-test_multinode-646190-m03_multinode-646190-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.21s)
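CopyFile pushes testdata/cp-test.txt into each node with `minikube cp` and reads it back with `ssh -n ... "sudo cat ..."`, covering host-to-node, node-to-host and node-to-node directions. A sketch of one host-to-node round trip using the exact commands from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) ([]byte, error) {
        return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
    }

    func main() {
        // Host -> node copy, then read the file back over ssh (commands as logged above).
        if out, err := run("-p", "multinode-646190", "cp", "testdata/cp-test.txt",
            "multinode-646190:/home/docker/cp-test.txt"); err != nil {
            panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
        }
        out, err := run("-p", "multinode-646190", "ssh", "-n", "multinode-646190",
            "sudo cat /home/docker/cp-test.txt")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }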

                                                
                                    
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-646190 node stop m03: (1.437370497s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 status
E0210 11:14:06.276362  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-646190 status: exit status 7 (441.861848ms)

                                                
                                                
-- stdout --
	multinode-646190
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-646190-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-646190-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-646190 status --alsologtostderr: exit status 7 (429.1097ms)

                                                
                                                
-- stdout --
	multinode-646190
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-646190-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-646190-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 11:14:06.403454  142079 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:14:06.403574  142079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:14:06.403583  142079 out.go:358] Setting ErrFile to fd 2...
	I0210 11:14:06.403586  142079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:14:06.403780  142079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:14:06.403927  142079 out.go:352] Setting JSON to false
	I0210 11:14:06.403958  142079 mustload.go:65] Loading cluster: multinode-646190
	I0210 11:14:06.404062  142079 notify.go:220] Checking for updates...
	I0210 11:14:06.404334  142079 config.go:182] Loaded profile config "multinode-646190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:14:06.404353  142079 status.go:174] checking status of multinode-646190 ...
	I0210 11:14:06.404757  142079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:14:06.404802  142079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:14:06.421236  142079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0210 11:14:06.421670  142079 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:14:06.422282  142079 main.go:141] libmachine: Using API Version  1
	I0210 11:14:06.422309  142079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:14:06.422685  142079 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:14:06.422906  142079 main.go:141] libmachine: (multinode-646190) Calling .GetState
	I0210 11:14:06.424628  142079 status.go:371] multinode-646190 host status = "Running" (err=<nil>)
	I0210 11:14:06.424649  142079 host.go:66] Checking if "multinode-646190" exists ...
	I0210 11:14:06.425188  142079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:14:06.425235  142079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:14:06.440464  142079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0210 11:14:06.440847  142079 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:14:06.441411  142079 main.go:141] libmachine: Using API Version  1
	I0210 11:14:06.441446  142079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:14:06.441754  142079 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:14:06.441933  142079 main.go:141] libmachine: (multinode-646190) Calling .GetIP
	I0210 11:14:06.444894  142079 main.go:141] libmachine: (multinode-646190) DBG | domain multinode-646190 has defined MAC address 52:54:00:a2:56:44 in network mk-multinode-646190
	I0210 11:14:06.445422  142079 main.go:141] libmachine: (multinode-646190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:56:44", ip: ""} in network mk-multinode-646190: {Iface:virbr1 ExpiryTime:2025-02-10 12:11:19 +0000 UTC Type:0 Mac:52:54:00:a2:56:44 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-646190 Clientid:01:52:54:00:a2:56:44}
	I0210 11:14:06.445451  142079 main.go:141] libmachine: (multinode-646190) DBG | domain multinode-646190 has defined IP address 192.168.39.68 and MAC address 52:54:00:a2:56:44 in network mk-multinode-646190
	I0210 11:14:06.445567  142079 host.go:66] Checking if "multinode-646190" exists ...
	I0210 11:14:06.445855  142079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:14:06.445892  142079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:14:06.460965  142079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
	I0210 11:14:06.461334  142079 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:14:06.461772  142079 main.go:141] libmachine: Using API Version  1
	I0210 11:14:06.461792  142079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:14:06.462079  142079 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:14:06.462286  142079 main.go:141] libmachine: (multinode-646190) Calling .DriverName
	I0210 11:14:06.462465  142079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 11:14:06.462503  142079 main.go:141] libmachine: (multinode-646190) Calling .GetSSHHostname
	I0210 11:14:06.465080  142079 main.go:141] libmachine: (multinode-646190) DBG | domain multinode-646190 has defined MAC address 52:54:00:a2:56:44 in network mk-multinode-646190
	I0210 11:14:06.465475  142079 main.go:141] libmachine: (multinode-646190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:56:44", ip: ""} in network mk-multinode-646190: {Iface:virbr1 ExpiryTime:2025-02-10 12:11:19 +0000 UTC Type:0 Mac:52:54:00:a2:56:44 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-646190 Clientid:01:52:54:00:a2:56:44}
	I0210 11:14:06.465504  142079 main.go:141] libmachine: (multinode-646190) DBG | domain multinode-646190 has defined IP address 192.168.39.68 and MAC address 52:54:00:a2:56:44 in network mk-multinode-646190
	I0210 11:14:06.465656  142079 main.go:141] libmachine: (multinode-646190) Calling .GetSSHPort
	I0210 11:14:06.465854  142079 main.go:141] libmachine: (multinode-646190) Calling .GetSSHKeyPath
	I0210 11:14:06.466006  142079 main.go:141] libmachine: (multinode-646190) Calling .GetSSHUsername
	I0210 11:14:06.466140  142079 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/multinode-646190/id_rsa Username:docker}
	I0210 11:14:06.548577  142079 ssh_runner.go:195] Run: systemctl --version
	I0210 11:14:06.555250  142079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:14:06.569470  142079 kubeconfig.go:125] found "multinode-646190" server: "https://192.168.39.68:8443"
	I0210 11:14:06.569507  142079 api_server.go:166] Checking apiserver status ...
	I0210 11:14:06.569542  142079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:14:06.582040  142079 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1065/cgroup
	W0210 11:14:06.592915  142079 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1065/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 11:14:06.592967  142079 ssh_runner.go:195] Run: ls
	I0210 11:14:06.598589  142079 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0210 11:14:06.604414  142079 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0210 11:14:06.604440  142079 status.go:463] multinode-646190 apiserver status = Running (err=<nil>)
	I0210 11:14:06.604450  142079 status.go:176] multinode-646190 status: &{Name:multinode-646190 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 11:14:06.604468  142079 status.go:174] checking status of multinode-646190-m02 ...
	I0210 11:14:06.604747  142079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:14:06.604787  142079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:14:06.620558  142079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36005
	I0210 11:14:06.621000  142079 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:14:06.621543  142079 main.go:141] libmachine: Using API Version  1
	I0210 11:14:06.621569  142079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:14:06.621924  142079 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:14:06.622122  142079 main.go:141] libmachine: (multinode-646190-m02) Calling .GetState
	I0210 11:14:06.623650  142079 status.go:371] multinode-646190-m02 host status = "Running" (err=<nil>)
	I0210 11:14:06.623670  142079 host.go:66] Checking if "multinode-646190-m02" exists ...
	I0210 11:14:06.623954  142079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:14:06.623998  142079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:14:06.638940  142079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37343
	I0210 11:14:06.639410  142079 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:14:06.639834  142079 main.go:141] libmachine: Using API Version  1
	I0210 11:14:06.639854  142079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:14:06.640203  142079 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:14:06.640422  142079 main.go:141] libmachine: (multinode-646190-m02) Calling .GetIP
	I0210 11:14:06.643242  142079 main.go:141] libmachine: (multinode-646190-m02) DBG | domain multinode-646190-m02 has defined MAC address 52:54:00:b1:3d:ae in network mk-multinode-646190
	I0210 11:14:06.643779  142079 main.go:141] libmachine: (multinode-646190-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:3d:ae", ip: ""} in network mk-multinode-646190: {Iface:virbr1 ExpiryTime:2025-02-10 12:12:21 +0000 UTC Type:0 Mac:52:54:00:b1:3d:ae Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-646190-m02 Clientid:01:52:54:00:b1:3d:ae}
	I0210 11:14:06.643814  142079 main.go:141] libmachine: (multinode-646190-m02) DBG | domain multinode-646190-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:b1:3d:ae in network mk-multinode-646190
	I0210 11:14:06.643961  142079 host.go:66] Checking if "multinode-646190-m02" exists ...
	I0210 11:14:06.644362  142079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:14:06.644406  142079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:14:06.660079  142079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38749
	I0210 11:14:06.660552  142079 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:14:06.661071  142079 main.go:141] libmachine: Using API Version  1
	I0210 11:14:06.661100  142079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:14:06.661439  142079 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:14:06.661639  142079 main.go:141] libmachine: (multinode-646190-m02) Calling .DriverName
	I0210 11:14:06.661848  142079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 11:14:06.661872  142079 main.go:141] libmachine: (multinode-646190-m02) Calling .GetSSHHostname
	I0210 11:14:06.664347  142079 main.go:141] libmachine: (multinode-646190-m02) DBG | domain multinode-646190-m02 has defined MAC address 52:54:00:b1:3d:ae in network mk-multinode-646190
	I0210 11:14:06.664756  142079 main.go:141] libmachine: (multinode-646190-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:3d:ae", ip: ""} in network mk-multinode-646190: {Iface:virbr1 ExpiryTime:2025-02-10 12:12:21 +0000 UTC Type:0 Mac:52:54:00:b1:3d:ae Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-646190-m02 Clientid:01:52:54:00:b1:3d:ae}
	I0210 11:14:06.664778  142079 main.go:141] libmachine: (multinode-646190-m02) DBG | domain multinode-646190-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:b1:3d:ae in network mk-multinode-646190
	I0210 11:14:06.664949  142079 main.go:141] libmachine: (multinode-646190-m02) Calling .GetSSHPort
	I0210 11:14:06.665133  142079 main.go:141] libmachine: (multinode-646190-m02) Calling .GetSSHKeyPath
	I0210 11:14:06.665284  142079 main.go:141] libmachine: (multinode-646190-m02) Calling .GetSSHUsername
	I0210 11:14:06.665418  142079 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20385-109271/.minikube/machines/multinode-646190-m02/id_rsa Username:docker}
	I0210 11:14:06.749965  142079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:14:06.764091  142079 status.go:176] multinode-646190-m02 status: &{Name:multinode-646190-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0210 11:14:06.764148  142079 status.go:174] checking status of multinode-646190-m03 ...
	I0210 11:14:06.764584  142079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:14:06.764641  142079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:14:06.780858  142079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I0210 11:14:06.781277  142079 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:14:06.781803  142079 main.go:141] libmachine: Using API Version  1
	I0210 11:14:06.781830  142079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:14:06.782180  142079 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:14:06.782387  142079 main.go:141] libmachine: (multinode-646190-m03) Calling .GetState
	I0210 11:14:06.784167  142079 status.go:371] multinode-646190-m03 host status = "Stopped" (err=<nil>)
	I0210 11:14:06.784204  142079 status.go:384] host is not running, skipping remaining checks
	I0210 11:14:06.784210  142079 status.go:176] multinode-646190-m03 status: &{Name:multinode-646190-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
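With m03 stopped, both `status` invocations above still print the per-node table but return a non-zero exit (exit status 7 in these runs), so callers have to inspect the exit code rather than stdout alone. A small Go sketch of reading that code:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Run the same status command the test runs; a stopped node surfaces as a
        // non-zero exit (exit status 7 in the runs logged above).
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-646190", "status")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("status exit code:", ee.ExitCode())
        }
    }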

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-646190 node start m03 -v=7 --alsologtostderr: (38.726858251s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.37s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (347.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-646190
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-646190
E0210 11:15:53.028363  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-646190: (3m2.836327705s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-646190 --wait=true -v=8 --alsologtostderr
E0210 11:19:06.277406  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-646190 --wait=true -v=8 --alsologtostderr: (2m45.033450642s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-646190
--- PASS: TestMultiNode/serial/RestartKeepsNodes (347.97s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-646190 node delete m03: (2.204401646s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.77s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 stop
E0210 11:20:53.022844  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:22:09.342991  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-646190 stop: (3m1.888645795s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-646190 status: exit status 7 (85.971238ms)

                                                
                                                
-- stdout --
	multinode-646190
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-646190-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-646190 status --alsologtostderr: exit status 7 (84.11898ms)

                                                
                                                
-- stdout --
	multinode-646190
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-646190-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 11:23:38.913280  145618 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:23:38.913522  145618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:23:38.913530  145618 out.go:358] Setting ErrFile to fd 2...
	I0210 11:23:38.913535  145618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:23:38.913739  145618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:23:38.913941  145618 out.go:352] Setting JSON to false
	I0210 11:23:38.913970  145618 mustload.go:65] Loading cluster: multinode-646190
	I0210 11:23:38.914014  145618 notify.go:220] Checking for updates...
	I0210 11:23:38.914342  145618 config.go:182] Loaded profile config "multinode-646190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:23:38.914360  145618 status.go:174] checking status of multinode-646190 ...
	I0210 11:23:38.914755  145618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:23:38.914795  145618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:23:38.929513  145618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0210 11:23:38.929958  145618 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:23:38.930573  145618 main.go:141] libmachine: Using API Version  1
	I0210 11:23:38.930596  145618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:23:38.930898  145618 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:23:38.931055  145618 main.go:141] libmachine: (multinode-646190) Calling .GetState
	I0210 11:23:38.932386  145618 status.go:371] multinode-646190 host status = "Stopped" (err=<nil>)
	I0210 11:23:38.932405  145618 status.go:384] host is not running, skipping remaining checks
	I0210 11:23:38.932422  145618 status.go:176] multinode-646190 status: &{Name:multinode-646190 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 11:23:38.932451  145618 status.go:174] checking status of multinode-646190-m02 ...
	I0210 11:23:38.932763  145618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 11:23:38.932807  145618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 11:23:38.946985  145618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0210 11:23:38.947323  145618 main.go:141] libmachine: () Calling .GetVersion
	I0210 11:23:38.947807  145618 main.go:141] libmachine: Using API Version  1
	I0210 11:23:38.947831  145618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 11:23:38.948131  145618 main.go:141] libmachine: () Calling .GetMachineName
	I0210 11:23:38.948313  145618 main.go:141] libmachine: (multinode-646190-m02) Calling .GetState
	I0210 11:23:38.949767  145618 status.go:371] multinode-646190-m02 host status = "Stopped" (err=<nil>)
	I0210 11:23:38.949784  145618 status.go:384] host is not running, skipping remaining checks
	I0210 11:23:38.949789  145618 status.go:176] multinode-646190-m02 status: &{Name:multinode-646190-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.06s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (115.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-646190 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0210 11:24:06.277342  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-646190 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.232884598s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-646190 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (115.76s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-646190
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-646190-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-646190-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (62.846811ms)

                                                
                                                
-- stdout --
	* [multinode-646190-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-646190-m02' is duplicated with machine name 'multinode-646190-m02' in profile 'multinode-646190'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-646190-m03 --driver=kvm2  --container-runtime=crio
E0210 11:25:53.022889  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-646190-m03 --driver=kvm2  --container-runtime=crio: (42.779135999s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-646190
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-646190: exit status 80 (214.904746ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-646190 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-646190-m03 already exists in multinode-646190-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-646190-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.86s)

                                                
                                    
TestScheduledStopUnix (112.69s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-502431 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-502431 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.958927278s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-502431 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-502431 -n scheduled-stop-502431
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-502431 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0210 11:29:50.801368  116470 retry.go:31] will retry after 56.491µs: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.802555  116470 retry.go:31] will retry after 104.298µs: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.803675  116470 retry.go:31] will retry after 209.888µs: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.804817  116470 retry.go:31] will retry after 456.407µs: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.805969  116470 retry.go:31] will retry after 538.642µs: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.807104  116470 retry.go:31] will retry after 530.835µs: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.808242  116470 retry.go:31] will retry after 1.42994ms: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.810440  116470 retry.go:31] will retry after 1.37207ms: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.812644  116470 retry.go:31] will retry after 1.773149ms: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.814852  116470 retry.go:31] will retry after 5.522749ms: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.821103  116470 retry.go:31] will retry after 5.782367ms: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.827312  116470 retry.go:31] will retry after 6.22756ms: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.834535  116470 retry.go:31] will retry after 17.125148ms: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.852781  116470 retry.go:31] will retry after 11.860487ms: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.865540  116470 retry.go:31] will retry after 25.380265ms: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
I0210 11:29:50.891868  116470 retry.go:31] will retry after 49.330541ms: open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/scheduled-stop-502431/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-502431 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-502431 -n scheduled-stop-502431
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-502431
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-502431 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0210 11:30:36.103072  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:53.029007  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-502431
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-502431: exit status 7 (67.567041ms)

                                                
                                                
-- stdout --
	scheduled-stop-502431
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-502431 -n scheduled-stop-502431
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-502431 -n scheduled-stop-502431: exit status 7 (65.434639ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-502431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-502431
--- PASS: TestScheduledStopUnix (112.69s)
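For reference, the scheduled-stop flow exercised here can be replayed with an installed minikube binary and the same flags the test uses (profile name reused from this run):

	$ minikube start -p scheduled-stop-502431 --memory=2048 --driver=kvm2 --container-runtime=crio
	$ minikube stop -p scheduled-stop-502431 --schedule 5m            # arm a stop five minutes out
	$ minikube status --format={{.TimeToStop}} -p scheduled-stop-502431
	$ minikube stop -p scheduled-stop-502431 --cancel-scheduled       # disarm it
	$ minikube stop -p scheduled-stop-502431 --schedule 15s           # re-arm; once it fires, status exits 7 and reports Stopped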

                                                
                                    
TestRunningBinaryUpgrade (221.67s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1316484845 start -p running-upgrade-593595 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1316484845 start -p running-upgrade-593595 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m1.236746116s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-593595 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-593595 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m36.372546498s)
helpers_test.go:175: Cleaning up "running-upgrade-593595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-593595
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-593595: (1.1908145s)
--- PASS: TestRunningBinaryUpgrade (221.67s)
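The running-upgrade scenario is simply two starts against the same profile, first with an old release and then with the binary under test, exactly as driven above (the old binary path is the temporary file the test downloads):

	$ /tmp/minikube-v1.26.0.1316484845 start -p running-upgrade-593595 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 start -p running-upgrade-593595 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 delete -p running-upgrade-593595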

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-460172 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-460172 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (95.738045ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-460172] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
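This pass is the expected rejection: --no-kubernetes and --kubernetes-version are mutually exclusive, so the command exits with status 14 (MK_USAGE). A sketch of the failing call versus the accepted ones, using the flags from this run:

	$ minikube start -p NoKubernetes-460172 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio   # rejected
	$ minikube config unset kubernetes-version                                       # clears a globally configured version, per the hint above
	$ minikube start -p NoKubernetes-460172 --no-kubernetes --driver=kvm2 --container-runtime=crio                             # accepted (see the later serial steps)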

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (96.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-460172 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-460172 --driver=kvm2  --container-runtime=crio: (1m36.660054996s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-460172 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.91s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (63.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-460172 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-460172 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m2.288744376s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-460172 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-460172 status -o json: exit status 2 (249.122708ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-460172","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-460172
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-460172: (1.231913864s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (63.77s)

                                                
                                    
TestNoKubernetes/serial/Start (50.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-460172 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-460172 --no-kubernetes --driver=kvm2  --container-runtime=crio: (50.292282145s)
--- PASS: TestNoKubernetes/serial/Start (50.29s)

                                                
                                    
TestNetworkPlugins/group/false (3.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-804475 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-804475 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (103.719437ms)

                                                
                                                
-- stdout --
	* [false-804475] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 11:34:27.900098  152950 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:34:27.900560  152950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:34:27.900579  152950 out.go:358] Setting ErrFile to fd 2...
	I0210 11:34:27.900587  152950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:34:27.901000  152950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-109271/.minikube/bin
	I0210 11:34:27.901869  152950 out.go:352] Setting JSON to false
	I0210 11:34:27.902856  152950 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8210,"bootTime":1739179058,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 11:34:27.902958  152950 start.go:139] virtualization: kvm guest
	I0210 11:34:27.904882  152950 out.go:177] * [false-804475] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 11:34:27.906498  152950 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:34:27.906501  152950 notify.go:220] Checking for updates...
	I0210 11:34:27.908816  152950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:34:27.910064  152950 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-109271/kubeconfig
	I0210 11:34:27.911243  152950 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-109271/.minikube
	I0210 11:34:27.912319  152950 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 11:34:27.913359  152950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:34:27.914752  152950 config.go:182] Loaded profile config "NoKubernetes-460172": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0210 11:34:27.914832  152950 config.go:182] Loaded profile config "cert-expiration-038969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 11:34:27.914933  152950 config.go:182] Loaded profile config "running-upgrade-593595": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0210 11:34:27.915010  152950 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:34:27.951072  152950 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 11:34:27.952254  152950 start.go:297] selected driver: kvm2
	I0210 11:34:27.952273  152950 start.go:901] validating driver "kvm2" against <nil>
	I0210 11:34:27.952300  152950 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:34:27.954479  152950 out.go:201] 
	W0210 11:34:27.955741  152950 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0210 11:34:27.956865  152950 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-804475 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-804475" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-804475" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.45:8443
  name: cert-expiration-038969
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.172:8443
  name: running-upgrade-593595
contexts:
- context:
    cluster: cert-expiration-038969
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-038969
  name: cert-expiration-038969
- context:
    cluster: running-upgrade-593595
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-593595
  name: running-upgrade-593595
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-038969
  user:
    client-certificate: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/cert-expiration-038969/client.crt
    client-key: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/cert-expiration-038969/client.key
- name: running-upgrade-593595
  user:
    client-certificate: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/running-upgrade-593595/client.crt
    client-key: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/running-upgrade-593595/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-804475

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804475"

                                                
                                                
----------------------- debugLogs end: false-804475 [took: 3.072016107s] --------------------------------
helpers_test.go:175: Cleaning up "false-804475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-804475
--- PASS: TestNetworkPlugins/group/false (3.34s)
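This group passes because the rejection is the expected one: with the crio runtime a CNI is mandatory, so --cni=false exits with status 14 before any VM work starts. For comparison, the CNI combinations that do start elsewhere in this report (memory and wait flags omitted here for brevity):

	$ minikube start -p false-804475 --cni=false --driver=kvm2 --container-runtime=crio                        # rejected: "crio" requires CNI
	$ minikube start -p kindnet-804475 --cni=kindnet --driver=kvm2 --container-runtime=crio
	$ minikube start -p calico-804475 --cni=calico --driver=kvm2 --container-runtime=crio
	$ minikube start -p custom-flannel-804475 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio
	$ minikube start -p enable-default-cni-804475 --enable-default-cni=true --driver=kvm2 --container-runtime=crio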

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-460172 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-460172 "sudo systemctl is-active --quiet service kubelet": exit status 1 (231.354698ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
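The verification above leans on systemctl's exit code: is-active returns non-zero when the unit is not active (status 3 here), and minikube ssh propagates that, so a failing exit is the passing outcome. A hand-run equivalent:

	$ minikube ssh -p NoKubernetes-460172 "sudo systemctl is-active --quiet service kubelet"
	$ echo $?   # non-zero confirms kubelet is not running in the --no-kubernetes profile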

                                                
                                    
TestNoKubernetes/serial/ProfileList (3.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.916348931s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.65s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-460172
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-460172: (1.330826213s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (46.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-460172 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-460172 --driver=kvm2  --container-runtime=crio: (46.127712194s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (46.13s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.16s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (125.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2807035907 start -p stopped-upgrade-622097 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2807035907 start -p stopped-upgrade-622097 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m21.644933636s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2807035907 -p stopped-upgrade-622097 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2807035907 -p stopped-upgrade-622097 stop: (2.16779049s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-622097 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-622097 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.693318433s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (125.51s)
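The stopped-binary upgrade mirrors the running upgrade above, except the old cluster is stopped before the new binary takes over; the three commands the test drives are:

	$ /tmp/minikube-v1.26.0.2807035907 start -p stopped-upgrade-622097 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ /tmp/minikube-v1.26.0.2807035907 -p stopped-upgrade-622097 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-622097 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio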

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-460172 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-460172 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.636382ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestPause/serial/Start (75.69s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-088075 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0210 11:35:53.027380  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-088075 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m15.694546458s)
--- PASS: TestPause/serial/Start (75.69s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.65s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-088075 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-088075 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.627814213s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.65s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-622097
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (56.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (56.531838378s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.53s)

                                                
                                    
TestPause/serial/Pause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-088075 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

                                                
                                    
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-088075 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-088075 --output=json --layout=cluster: exit status 2 (255.900354ms)

                                                
                                                
-- stdout --
	{"Name":"pause-088075","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-088075","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

                                                
                                    
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-088075 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
TestPause/serial/PauseAgain (0.76s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-088075 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.76s)

                                                
                                    
TestPause/serial/DeletePaused (0.9s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-088075 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.90s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.7s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.70s)
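Taken together, the TestPause serial steps walk the whole pause lifecycle; a condensed replay with the profile from this run (note that status deliberately exits 2 while the cluster is paused, which is why VerifyStatus treats the non-zero exit as expected):

	$ minikube start -p pause-088075 --memory=2048 --install-addons=false --wait=all --driver=kvm2 --container-runtime=crio
	$ minikube pause -p pause-088075
	$ minikube status -p pause-088075 --output=json --layout=cluster   # StatusCode 418 "Paused", exit status 2
	$ minikube unpause -p pause-088075
	$ minikube pause -p pause-088075
	$ minikube delete -p pause-088075
	$ minikube profile list --output json                              # confirms the profile is gone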

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (66.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m6.240831429s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (100.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m40.608276328s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.61s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-804475 "pgrep -a kubelet"
I0210 11:37:54.626526  116470 config.go:182] Loaded profile config "auto-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-804475 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-b77h6" [a5355a51-d71e-491c-be9b-c8e803266a90] Pending
helpers_test.go:344: "netcat-5d86dc444-b77h6" [a5355a51-d71e-491c-be9b-c8e803266a90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-b77h6" [a5355a51-d71e-491c-be9b-c8e803266a90] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004168562s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.33s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-804475 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
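The DNS, Localhost and HairPin checks all execute inside the netcat deployment created from testdata/netcat-deployment.yaml, so the same probes can be issued by hand against any of the *-804475 contexts:

	$ kubectl --context auto-804475 exec deployment/netcat -- nslookup kubernetes.default
	$ kubectl --context auto-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	$ kubectl --context auto-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin: the pod reaching itself through its own service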

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.980641677s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.98s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bttgz" [42be9445-2459-4338-9835-f6675e5501d7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005284575s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-804475 "pgrep -a kubelet"
I0210 11:38:38.637881  116470 config.go:182] Loaded profile config "kindnet-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-804475 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-b42hr" [5737fc09-9f44-417f-8536-a6cef552595b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-b42hr" [5737fc09-9f44-417f-8536-a6cef552595b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004144802s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-804475 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (58.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0210 11:39:06.277109  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (58.26497652s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (58.27s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-frg9l" [387cdda3-5b88-4f5e-b9f5-17c5c1d43f7d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004694393s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-804475 "pgrep -a kubelet"
I0210 11:39:22.830570  116470 config.go:182] Loaded profile config "calico-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-804475 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-srsr7" [c465c4e5-611f-446a-8612-0142c3a7b128] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-srsr7" [c465c4e5-611f-446a-8612-0142c3a7b128] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005797628s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-804475 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-804475 "pgrep -a kubelet"
I0210 11:39:35.085156  116470 config.go:182] Loaded profile config "custom-flannel-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-804475 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-mqx7r" [c7101417-fa8b-4d17-9808-9e65fbe370ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-mqx7r" [c7101417-fa8b-4d17-9808-9e65fbe370ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.003596523s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-804475 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (70.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m10.21855042s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-804475 "pgrep -a kubelet"
I0210 11:40:04.257911  116470 config.go:182] Loaded profile config "enable-default-cni-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-804475 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-24f7m" [b443cd8e-5399-4e14-80d4-65d9548205cb] Pending
helpers_test.go:344: "netcat-5d86dc444-24f7m" [b443cd8e-5399-4e14-80d4-65d9548205cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-24f7m" [b443cd8e-5399-4e14-80d4-65d9548205cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00467124s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (66.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-804475 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m6.667940701s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.67s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-804475 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6s4bm" [bce23a23-af83-4bce-8ffc-7a2cac81ba5c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004134212s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-804475 "pgrep -a kubelet"
I0210 11:41:08.273486  116470 config.go:182] Loaded profile config "flannel-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-804475 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lqhpp" [9ce99593-fda1-4855-a332-ad5bd5e27a9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-lqhpp" [9ce99593-fda1-4855-a332-ad5bd5e27a9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003452536s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-804475 "pgrep -a kubelet"
I0210 11:41:14.926057  116470 config.go:182] Loaded profile config "bridge-804475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-804475 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gsvls" [e57f94c2-1796-4d26-8668-e78140118623] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gsvls" [e57f94c2-1796-4d26-8668-e78140118623] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.009224624s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-804475 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-804475 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-804475 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (77.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-484935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-484935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m17.155021887s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (79.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-413450 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-413450 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m19.14053606s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (79.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-448087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-448087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m39.238089s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-484935 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cd2a4358-e975-4193-aeae-a60a21b53e21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cd2a4358-e975-4193-aeae-a60a21b53e21] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.238737745s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-484935 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-484935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0210 11:42:54.943110  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:42:54.949538  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:42:54.960898  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:42:54.982367  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:42:55.023875  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:42:55.105343  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:42:55.267339  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:42:55.589636  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-484935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.36844896s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-484935 describe deploy/metrics-server -n kube-system
E0210 11:42:56.231614  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-413450 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b8583dba-972e-45a1-a011-0837c439d597] Pending
helpers_test.go:344: "busybox" [b8583dba-972e-45a1-a011-0837c439d597] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0210 11:42:57.513072  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:00.074540  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [b8583dba-972e-45a1-a011-0837c439d597] Running
E0210 11:43:05.195984  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004370722s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-413450 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-484935 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-484935 --alsologtostderr -v=3: (1m31.03075245s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-413450 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-413450 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-413450 --alsologtostderr -v=3
E0210 11:43:15.438091  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-413450 --alsologtostderr -v=3: (1m31.471668461s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-448087 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d35a8783-75b5-40b0-a18f-42c29a1908f7] Pending
helpers_test.go:344: "busybox" [d35a8783-75b5-40b0-a18f-42c29a1908f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d35a8783-75b5-40b0-a18f-42c29a1908f7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003742867s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-448087 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-448087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0210 11:43:32.415396  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:32.422331  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:32.433835  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:32.455243  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:32.496676  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:32.578943  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:32.740759  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-448087 describe deploy/metrics-server -n kube-system
E0210 11:43:33.062875  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-448087 --alsologtostderr -v=3
E0210 11:43:33.705176  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:34.987022  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:35.920102  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:37.548833  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:42.670766  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:43:52.912805  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:06.276632  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/functional-567541/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:13.394172  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:16.621041  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:16.627462  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:16.638846  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:16.660209  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:16.701641  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:16.783170  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:16.882416  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/auto-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:16.944872  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:17.266683  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:17.908547  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:19.190167  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:21.752376  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:26.874251  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-448087 --alsologtostderr -v=3: (1m31.388521423s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-484935 -n no-preload-484935
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-484935 -n no-preload-484935: exit status 7 (66.31978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-484935 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (349.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-484935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 11:44:35.344063  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:35.350517  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:35.361885  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:35.383262  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:35.424711  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:35.506270  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:35.668265  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:35.989990  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:36.631948  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:37.116510  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:37.913508  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-484935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m48.755915203s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-484935 -n no-preload-484935
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (349.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-413450 -n embed-certs-413450
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-413450 -n embed-certs-413450: exit status 7 (71.191669ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-413450 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (310.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-413450 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 11:44:40.474872  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:45.596889  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:54.355916  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/kindnet-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:55.838809  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:44:57.598259  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/calico-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-413450 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m10.02301041s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-413450 -n embed-certs-413450
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (310.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-448087 -n default-k8s-diff-port-448087
E0210 11:45:04.562225  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:04.568964  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-448087 -n default-k8s-diff-port-448087: exit status 7 (76.777943ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-448087 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0210 11:45:04.581031  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:04.602425  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:04.644484  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-448087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 11:45:04.725924  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:04.888134  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:05.209890  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:05.851998  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:07.133694  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:09.695432  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:14.817369  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:16.320661  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:45:25.058771  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-448087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m36.438635481s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-448087 -n default-k8s-diff-port-448087
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-510006 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-510006 --alsologtostderr -v=3: (3.295348948s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-510006 -n old-k8s-version-510006: exit status 7 (71.388031ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-510006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
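
Note on the "exit status 7 (may be ok)" lines above: `minikube status` reports machine state through its exit code, so a non-zero exit immediately after `minikube stop` is expected here; the test logs it and continues. A minimal Go sketch of that tolerant pattern, assuming `minikube` is on the PATH and using the profile name from the log purely for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Query a profile the way the step above does: run `minikube status --format={{.Host}}`
// and treat a non-zero exit code as state information rather than a hard failure.
func main() {
	profile := "old-k8s-version-510006" // illustrative profile name

	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	host := strings.TrimSpace(string(out))

	if exitErr, ok := err.(*exec.ExitError); ok {
		// e.g. "Stopped" with exit code 7 right after `minikube stop`
		fmt.Printf("host=%q exit=%d (may be ok)\n", host, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("failed to run minikube:", err)
		return
	}
	fmt.Printf("host=%q\n", host)
}
```

The same tolerant handling is what the following `addons enable dashboard` step relies on: the addon is enabled against the stopped profile and the test still passes.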

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-svwtp" [a1d157b1-1c1b-45b0-9c11-6f960b37bd74] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004287528s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-svwtp" [a1d157b1-1c1b-45b0-9c11-6f960b37bd74] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003784182s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-413450 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-413450 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-413450 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-413450 -n embed-certs-413450
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-413450 -n embed-certs-413450: exit status 2 (255.957894ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-413450 -n embed-certs-413450
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-413450 -n embed-certs-413450: exit status 2 (257.013051ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-413450 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-413450 -n embed-certs-413450
E0210 11:50:03.045962  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/custom-flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-413450 -n embed-certs-413450
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.83s)
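
The pause checks above read one field at a time with `--format={{.APIServer}}` and `--format={{.Kubelet}}`, again treating the exit-status-2 results as state rather than failure. The same fields can be read in a single call if JSON output is used. A short sketch, with the assumptions that `minikube status -o json` is available, that its field names match the Go templates used above (Host, Kubelet, APIServer), and that the profile name is only illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names mirror the templates used in the log above ({{.Host}}, {{.Kubelet}}, {{.APIServer}}).
type profileStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	profile := "embed-certs-413450" // illustrative profile name

	// A paused cluster makes `minikube status` exit non-zero, so keep whatever
	// stdout was produced and only give up if there is nothing to decode.
	out, err := exec.Command("minikube", "status", "-p", profile, "-o", "json").Output()
	if len(out) == 0 {
		fmt.Println("no status output:", err)
		return
	}

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("could not decode status JSON:", err)
		return
	}
	if st.APIServer == "Paused" && st.Kubelet == "Stopped" {
		fmt.Printf("%s is paused\n", st.Name)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}
```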

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (49.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-188461 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-188461 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (49.357020666s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.36s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-cjk6r" [1cf274b2-734e-4b7a-a0f8-993568e95956] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-cjk6r" [1cf274b2-734e-4b7a-a0f8-993568e95956] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004580501s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-cjk6r" [1cf274b2-734e-4b7a-a0f8-993568e95956] Running
E0210 11:50:32.266428  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/enable-default-cni-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005961448s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-484935 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-484935 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-484935 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-484935 --alsologtostderr -v=1: (1.021341796s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-484935 -n no-preload-484935
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-484935 -n no-preload-484935: exit status 2 (310.407838ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-484935 -n no-preload-484935
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-484935 -n no-preload-484935: exit status 2 (282.780645ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-484935 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-484935 -n no-preload-484935
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-484935 -n no-preload-484935
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-m9kxt" [e90ca691-1167-44a7-aaa2-669e44e41bce] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-m9kxt" [e90ca691-1167-44a7-aaa2-669e44e41bce] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003947671s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-m9kxt" [e90ca691-1167-44a7-aaa2-669e44e41bce] Running
E0210 11:50:53.023359  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/addons-176336/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003762732s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-448087 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-188461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-188461 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-188461 --alsologtostderr -v=3: (10.351789449s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-448087 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-448087 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-448087 -n default-k8s-diff-port-448087
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-448087 -n default-k8s-diff-port-448087: exit status 2 (260.185436ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-448087 -n default-k8s-diff-port-448087
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-448087 -n default-k8s-diff-port-448087: exit status 2 (251.920445ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-448087 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-448087 -n default-k8s-diff-port-448087
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-448087 -n default-k8s-diff-port-448087
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-188461 -n newest-cni-188461
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-188461 -n newest-cni-188461: exit status 7 (67.047057ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-188461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (38.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-188461 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 11:51:15.138171  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:51:29.764963  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/flannel-804475/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:51:42.840316  116470 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/bridge-804475/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-188461 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (37.8556518s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-188461 -n newest-cni-188461
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-188461 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-188461 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-188461 -n newest-cni-188461
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-188461 -n newest-cni-188461: exit status 2 (231.417816ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-188461 -n newest-cni-188461
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-188461 -n newest-cni-188461: exit status 2 (233.467211ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-188461 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-188461 -n newest-cni-188461
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-188461 -n newest-cni-188461
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.27s)

                                                
                                    

Test skip (40/327)

| Order | Skipped test | Duration (s) |
|-------|--------------|--------------|
| 5 | TestDownloadOnly/v1.20.0/cached-images | 0 |
| 6 | TestDownloadOnly/v1.20.0/binaries | 0 |
| 7 | TestDownloadOnly/v1.20.0/kubectl | 0 |
| 14 | TestDownloadOnly/v1.32.1/cached-images | 0 |
| 15 | TestDownloadOnly/v1.32.1/binaries | 0 |
| 16 | TestDownloadOnly/v1.32.1/kubectl | 0 |
| 20 | TestDownloadOnlyKic | 0 |
| 29 | TestAddons/serial/Volcano | 0.29 |
| 33 | TestAddons/serial/GCPAuth/RealCredentials | 0 |
| 39 | TestAddons/parallel/Olm | 0 |
| 46 | TestAddons/parallel/AmdGpuDevicePlugin | 0 |
| 50 | TestDockerFlags | 0 |
| 53 | TestDockerEnvContainerd | 0 |
| 55 | TestHyperKitDriverInstallOrUpdate | 0 |
| 56 | TestHyperkitDriverSkipUpgrade | 0 |
| 107 | TestFunctional/parallel/DockerEnv | 0 |
| 108 | TestFunctional/parallel/PodmanEnv | 0 |
| 116 | TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel | 0.01 |
| 122 | TestFunctional/parallel/TunnelCmd/serial/StartTunnel | 0.01 |
| 123 | TestFunctional/parallel/TunnelCmd/serial/WaitService | 0.01 |
| 124 | TestFunctional/parallel/TunnelCmd/serial/AccessDirect | 0.01 |
| 125 | TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig | 0.01 |
| 126 | TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil | 0.01 |
| 127 | TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS | 0.01 |
| 128 | TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel | 0.01 |
| 156 | TestFunctionalNewestKubernetes | 0 |
| 157 | TestGvisorAddon | 0 |
| 179 | TestImageBuild | 0 |
| 206 | TestKicCustomNetwork | 0 |
| 207 | TestKicExistingNetwork | 0 |
| 208 | TestKicCustomSubnet | 0 |
| 209 | TestKicStaticIP | 0 |
| 241 | TestChangeNoneUser | 0 |
| 244 | TestScheduledStopWindows | 0 |
| 246 | TestSkaffold | 0 |
| 248 | TestInsufficientStorage | 0 |
| 252 | TestMissingContainerUpgrade | 0 |
| 260 | TestNetworkPlugins/group/kubenet | 2.86 |
| 268 | TestNetworkPlugins/group/cilium | 3.79 |
| 281 | TestStartStop/group/disable-driver-mounts | 0.14 |
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-176336 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-804475 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-804475" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-804475" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.45:8443
  name: cert-expiration-038969
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.172:8443
  name: running-upgrade-593595
contexts:
- context:
    cluster: cert-expiration-038969
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-038969
  name: cert-expiration-038969
- context:
    cluster: running-upgrade-593595
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-593595
  name: running-upgrade-593595
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-038969
  user:
    client-certificate: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/cert-expiration-038969/client.crt
    client-key: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/cert-expiration-038969/client.key
- name: running-upgrade-593595
  user:
    client-certificate: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/running-upgrade-593595/client.crt
    client-key: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/running-upgrade-593595/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-804475

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804475"

                                                
                                                
----------------------- debugLogs end: kubenet-804475 [took: 2.716714682s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-804475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-804475
--- SKIP: TestNetworkPlugins/group/kubenet (2.86s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-804475 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-804475" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.45:8443
  name: cert-expiration-038969
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20385-109271/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.172:8443
  name: running-upgrade-593595
contexts:
- context:
    cluster: cert-expiration-038969
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-038969
  name: cert-expiration-038969
- context:
    cluster: running-upgrade-593595
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:33:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-593595
  name: running-upgrade-593595
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-038969
  user:
    client-certificate: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/cert-expiration-038969/client.crt
    client-key: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/cert-expiration-038969/client.key
- name: running-upgrade-593595
  user:
    client-certificate: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/running-upgrade-593595/client.crt
    client-key: /home/jenkins/minikube-integration/20385-109271/.minikube/profiles/running-upgrade-593595/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-804475

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-804475" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804475"

                                                
                                                
----------------------- debugLogs end: cilium-804475 [took: 3.636436355s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-804475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-804475
--- SKIP: TestNetworkPlugins/group/cilium (3.79s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-305648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-305648
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    